Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!
Kia ora,
Welcome to the May edition of the Digital Identity NZ newsletter. This month, we’re excited to introduce our new Executive Director and share insights from Techweek25’s Foundations for Tomorrow event.
Andy Higgs Appointed as New Executive Director
We’re pleased to announce that Andy Higgs has joined Digital Identity NZ as our new Executive Director.
Andy brings over 20 years of experience across digital identity, AI strategy, and innovation in both public and private sectors. His background includes leadership roles at Futureverse and Centrality, where he focused on self-sovereign identity solutions and ecosystem partnerships.
His experience extends to policy development with the Department of Internal Affairs and Ministry of Business, Innovation and Employment, including work on the Digital Identity Services Trust Framework and the Consumer Data Right.
Andy’s collaborative approach will be valuable as DINZ continues to work alongside members to build a trusted digital identity ecosystem for everyone in Aotearoa.
Digital Public Infrastructure: Foundations for Tomorrow Event
During Techweek25, government, industry, and public sector leaders gathered at Parliament’s Legislative Chamber to discuss how digital public infrastructure (DPI) could transform service delivery across New Zealand.
Key takeaways for the digital identity community:
Read the full event recap here.
Member News
Our DINZ community continues to grow! We’re delighted to welcome POLipay as a member and look forward to featuring and engaging them in our ecosystem.
See all organisation members here.
Stay Connected
Thank you for being part of our community. We look forward to sharing more updates next month.
Ngā mihi nui,
The team at Digital Identity NZ
Read full news here: Introducing Our New Executive Director | May Newsletter
The post Introducing Our New Executive Director | May Newsletter appeared first on Digital Identity New Zealand.
ABSTRACT: “Fair Witnessing” is a new approach for asserting and interpreting digital claims in a way that mirrors real-world human trust: through personal observation, contextual disclosure, and progressive validation. It can be implemented with the decentralized architecture of Gordian Envelopes to allow individuals to make verifiable statements while balancing privacy, accountability, and interpretability. At its core, fair witnessing is not about declaring truth: it’s about showing your work.
In the early days of decentralized identity, we referred to what we were working on as “Verifiable Claims.” The idea was simple: let people make cryptographically signed statements and allow others to verify them. But something unexpected happened. People assumed these claims would settle arguments or stop disinformation. They saw the term “verifiable” and equated it with “truth.”
The reality was more modest: we could verify the source of a claim but not its accuracy. We could assert that a claim came from a specific person or organization (or even camera or other object) but not whether that claim was unbiased, well-observed, or contextually complete.
This misunderstanding revealed a deeper problem: how do we represent what someone actually saw and how they saw it, in a way that honors the complexity of human trust?
A Heinleinian Inspiration
In Stranger in a Strange Land, Robert Heinlein described a special profession: the Fair Witness. A Fair Witness would be trained to observe carefully, report precisely, make no assumptions, and avoid bias. If asked what color a house was, a Fair Witness would respond, “It appears to be white on this side.”
It is this spirit we want to capture to fulfill the promise of the original verifiable claims.
A Fair Witness in our digital era is someone who not only asserts a claim but also shares the conditions under which it was made, including context, methodology, limitations, and bias:
- What were the physical conditions of the observation?
- Was the observer physically present?
- Did they act independently?
- What interests or leanings might have shaped their perception?
- How did they minimize those biases?
These are not just nice-to-haves. They are necessary components of evaluating a claim’s credibility.
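To make these components concrete, here is a minimal sketch of how a fair-witness statement might be structured as data. The field names are illustrative assumptions, not taken from any published schema.

```python
from dataclasses import dataclass, field

@dataclass
class FairWitnessStatement:
    """Illustrative shape of a fair-witness claim; field names are hypothetical."""
    claim: str                     # the observation itself
    observer: str                  # who asserts it (may be pseudonymous)
    conditions: str                # physical conditions of the observation
    present_in_person: bool        # was the observer physically present?
    independent: bool              # did they act independently?
    disclosed_biases: list = field(default_factory=list)  # interests or leanings
    mitigations: list = field(default_factory=list)       # how biases were minimized

statement = FairWitnessStatement(
    claim="The house appears to be white on this side.",
    observer="did:example:alice",
    conditions="Clear daylight, viewed from the street, roughly 20m away",
    present_in_person=True,
    independent=True,
    disclosed_biases=["contracted by the homeowner"],
    mitigations=["photographs retained", "second observer consulted"],
)
```

The point of the structure is that the context travels with the claim, so a reader evaluates the two together rather than taking the claim alone.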
Beyond Binary Trust
Fair witnessing challenges binary notions of trust. Traditional systems ask a “yes” or “no” question: do you trust this certificate? This issuer?
But trust is rarely binary like this in the real world. It is layered, contextual, and progressive. The claim made by a pseudonymous environmental scientist might start out with low trust but could grow in credibility as:
- They reveal their professional history.
- Others endorse their work.
- They disclose how they mitigated their potential biases.
Trust builds over time, not in a single transaction. That’s progressive trust.
Trust as a Nested Statement
To marry a fair witness claim to the notion of progressive trust requires the nesting of information. As shown in the example of the environmental scientist, the witnessing of an observation gains weight as context is added: turning the scientist’s claims into a fair-witness statement required collecting information about who the scientist is, what their training is, and what their peers think of them.
But as noted, progressive trust isn’t something that occurs in a single transaction: it’s revealed over time. We don’t want it to all be revealed at once, because that could result in information overload for someone consulting a claim and could have privacy implications for the witness.
A progressive trust model of fair witnessing requires that you show what you must and that you withhold what’s not needed—until it is.
Privacy and Accountability, Together
This model strikes a crucial balance. On one hand, it empowers individuals (fair witnesses) to speak from experience without needing permission from a centralized authority. On the other hand, it allows others to verify the integrity of the claim without requiring total exposure.
There are numerous use cases:
- You can prove you were trained without revealing your name.
- You can demonstrate personal observation without revealing your exact location.
- You can commit to a fact today and prove you knew it later.

Fair Witnessing with Gordian Envelope
The demands of Fair Witnessing go beyond the capabilities of traditional verifiable credentials (VCs), primarily because VCs can’t remove signed information while maintaining its validation—and that ability is critically important if you want to nest information for revelation over time.
Fortunately, a technology already exists that provides this precise capability: Blockchain Commons’ Gordian Envelope, which allows for:
- the organized storage of information;
- the validation of that information through signatures;
- the elision of that information;
- the continued validation of the information after elision; and
- the provable restoration of that information.
Any subject, predicate, or object in Gordian Envelope can itself be a claim, optionally encrypted or elided. This enables a deeply contextual, inspectable form of expression.
For example:
- Alice could make a fair-witness observation, which would be an envelope.
- Information on the context of Alice’s assertion can be a sub-envelope.
- A credential for fair witness training can be a sub-envelope.
- Endorsements of Alice’s work as a fair witness can be sub-envelopes.
- Endorsements, credentials, and even the entire envelope can be signed by the appropriate parties.
- Any envelope or sub-envelope can be elided, without affecting these signatures and without impacting the ability to provably restore the data later.
It’s progressive trust appropriate for use with fair witnessing in an existing form!
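Here is a minimal sketch of the elision mechanic that makes this work, using only the Python standard library. This is not the actual Gordian Envelope encoding (which uses deterministic CBOR and real digital signatures); HMAC stands in for a signature, and a simple tree of digests stands in for envelope structure. The point is to show why elided data stays verifiable and provably restorable.

```python
import hashlib, hmac, json

def digest(obj) -> str:
    """Hash a leaf value, or a dict whose values are already digests."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# Alice's observation, with context and credential as nested sub-claims.
observation = {"claim": "It appears to be white on this side."}
context = {"conditions": "daylight, street level", "present": True}
credential = {"training": "Fair Witness, Level 1"}

# Commit to each part by its digest; the "envelope" is the tree of digests.
tree = {"observation": digest(observation),
        "context": digest(context),
        "credential": digest(credential)}
root = digest(tree)

# Stand-in signature over the root (a real envelope uses public-key signatures).
signature = hmac.new(b"alice-signing-key", root.encode(), "sha256").hexdigest()

# ELIDE the credential: share only its digest. The tree, root, and signature
# are unchanged, so the signed statement still verifies.
shared = {"observation": observation, "context": context,
          "credential_digest": tree["credential"]}

# A verifier recomputes the root from what was shared plus the digest.
recomputed = digest({"observation": digest(shared["observation"]),
                     "context": digest(shared["context"]),
                     "credential": shared["credential_digest"]})
assert recomputed == root  # signature over root remains valid

# Later, Alice can provably RESTORE the elided part: it hashes to the
# digest she committed to earlier.
assert digest(credential) == shared["credential_digest"]
```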
Toward a New Epistemology
Being a Fair Witness isn’t about declaring truth. It’s about saying what’s known, with context, so others can assess what’s true. Truth, in this model, is interpreted, not imposed. A verifier—or a jury—decides if a claim is credible, not because a central authority says so, but because the Fair Witness has provided information with sufficient context and endorsements.
In other words, fair witnessing is not about what is true, but about how we responsibly say what we believe to be true—and what others can do with that.
This is epistemology (the theory of knowledge) that’s structured as a graph. It’s cryptographically sound, privacy-respecting, and human-auditable. It reflects real-world trust: messy, contextual, and layered. By modeling that complexity rather than flattening it, we gain both rigor and realism.
Conclusion
In a world of machine-generated misinformation, ideological polarization, and institutional distrust, we must return to the foundations: observation, context, and human responsibility.
Fair witnessing offers a new path forward—one that is verifiable, privacy-respecting, and grounded in how humans actually trust.
Learn more: [ Progressive Trust | Gordian Envelope ]
Marie Jordan – OpenID Foundation Secretary
About the OpenID Foundation
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Notice of Vote for Proposed OpenID Attachments 1.0 Final Specification first appeared on OpenID Foundation.
After decades of cautiously watching from the sidelines, governments around the world have started investing in, rolling out, and regulating digital identity systems on aggressive timelines. These foundational changes to government infrastructure and the economy are happening largely outside public awareness, despite their generational consequences for privacy.
Digital identity systems implemented by governments today will shape privacy for decades. Whatever ecosystems and technical architectures are established in the coming years could ossify quickly, and it would take enormous political will to make changes at such a foundational level if society develops buyer's remorse once the ripple effects become clear.
That's why nearly 100 experts across technology, policy, and civil liberties have united around one principle: digital identity systems must be built without latent tracking capabilities that could enable ubiquitous surveillance.
Who's Behind ThisCivil society groups working on legal advocacy and industry oversight (ACLU, EFF), cybersecurity experts (including Bruce Schneier), privacy-by-design software companies of various sizes (Brave, many DIF members), and experts from university faculties (Brown, Columbia, Imperial College London) all signed on. The list includes authors of collaborative open standards, chief executives, state privacy officers, and other public servants. This is not a coalition of "activists" so much as a broad coalition of experts and policy-watchers sounding an alarm about consequential decisions passing largely unnoticed by the average citizen and end-user.
The breadth of this coalition reflects widespread concern about the technical and policy implications of embedded tracking capabilities.
What "Phone Home" MeansAs a general rule, "phone-home" is a shorthand for architectural principles of tracking enablement (just as "no phone-home" refers to tracking mitigation, broadly speaking). When a verifier of credentials interacts directly with the credential's issuer—even if just to check validity or revocation status—they are "phoning" the credential's "home." This opens the subject and/or the holder of that credential to privacy risks, no matter how well the request is anonymized or handled. These API connections create data that can be combined, correlated, and abused, especially when verifiers share information or when issuers abuse their role.
The risks multiply when applied across domains. Federated protocols developed for use within organizations become surveillance systems when used between different sectors or jurisdictions. Phone home capabilities that seem innocuous within a single domain can become tools for tracking and control when applied broadly without aggressive oversight and fine-tuning. Over time, little mismatches and slippages in how these protocols work get exploited and stretched, amplifying glitches.
In the worst-case scenario, some systems enable real-time revocation decisions, giving issuers—potentially governments—immediate control over citizens' ability to access services, travel, or participate in society. A natural tendency to "over-request" foundational documents in situations where such strong identification is unjustified is amplified by familiarity, lack of friction, and other UX catnip; all the SHOULDs in the world won't stop verifiers from doing it. And verifiers over-asking without also providing a fallback or "slow lane" can make a sudden or temporary unavailability of foundational credentials painful or even exclusionary. The side-effects and externalities pile up dangerously in this industry!
Technologists see these kinds of capabilities (phone-home of any kind, remote revocation, low-friction foundational identity requests) like loaded guns in Act 1 of a Chekhov play: "If this capability exists within a digital identity system, even inactively, it will eventually be misused."
The Scale and Timing Problem
Most foundational identity systems being implemented for national-scale deployment include system-wide phone home tracking capabilities, either actively or latently. Many policymakers involved in these rollouts are not even aware of the tracking potential built into the standards they are adopting.
Four factors make this moment critical:
- Scale of deployment: These systems will serve billions of users across developed nations, effectively replacing physical credentials.
- Precedent-setting effects: When one jurisdiction adopts tracking-enabled systems, it influences global practices and standards.
- Infrastructure persistence: Technical decisions made today will persist for decades, becoming prohibitively expensive to change once embedded.
- Mission creep inevitability: Capabilities developed for legitimate purposes like fraud prevention naturally accrue new private-sector and/or public-sector use-cases over time due to natural market pressures. Today's private-sector usage is tomorrow's public-sector secondary data market.

The Fallacy of "Privacy by Policy"
The fundamental problem with latent tracking capabilities is that policies change, but technical architecture persists. If a system has surveillance capability—even if unused—it will eventually be activated. Emergencies, changing administrations, or shifting political priorities can quickly justify "pressing the button" to enable widespread tracking.
The solution is simple: they cannot press a button they do not have.
Consider AAMVA's recent decision to prohibit the "server retrieval" capability throughout the U.S.—a positive step that we welcome. However, most low-level implementations (e.g. core libraries) will likely implement the entire specification and leave it to the last-mile implementers to honor (or not) this policy. As an incubator of new specifications and prototypes, DIF feels strongly that jurisdiction-by-jurisdiction policies are just "turning off" what the specification still instructs software to implement, leaving later policies free to turn it back on at the flick of a switch. We believe the underlying ISO specification needs to remove "server retrieval" completely, lest every authority in the U.S. remain one emergency away from activating broad, identity-based surveillance of all citizens.
Privacy-Preserving Alternatives Exist
The choice between security and privacy is false. Offline-first verification operates without server communication—the credential contains cryptographic proofs that can be validated independently. The ISO 18013-5 standard itself includes "device retrieval" mode, a privacy-preserving alternative that functions entirely offline.
Even credential revocation can be implemented without phone home capabilities. Privacy-preserving revocation systems are in production today, proving that security and privacy can coexist.
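A minimal sketch of the no-phone-home alternative follows, using only the Python standard library. HMAC stands in for the issuer's public-key signature, the `status_index` field is a hypothetical name, and the bitstring status list is simplified; the essential property is that the verifier contacts no one at presentation time, because the status list was downloaded in bulk earlier (revealing nothing about any specific credential).

```python
import hmac, json

ISSUER_KEY = b"issuer-demo-key"  # stand-in: real systems use public-key signatures

def verify_offline(credential: dict, signature: str, status_list: bytes) -> bool:
    """Validate a credential with NO call to the issuer.

    1. Check the issuer's signature over the credential itself.
    2. Check revocation against a status list cached earlier in bulk
       (herd privacy: the download reveals no specific credential).
    """
    payload = json.dumps(credential, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, "sha256").hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    idx = credential["status_index"]               # position in the bitstring
    revoked = (status_list[idx // 8] >> (idx % 8)) & 1
    return not revoked

credential = {"age_over_18": True, "status_index": 42}
signature = hmac.new(ISSUER_KEY, json.dumps(credential, sort_keys=True).encode(),
                     "sha256").hexdigest()
status_list = bytes(1024)    # cached earlier; all zero bits = nothing revoked
print(verify_offline(credential, signature, status_list))  # True
```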
The technology exists. The standards exist. What has been missing is commitment to prioritize privacy over the operational convenience of centralized tracking.
Moving Forward
Awareness is growing. We welcome developments like AAMVA's prohibition of server retrieval, but more work is needed across the broader digital identity ecosystem to eliminate latent surveillance capabilities entirely.
The Decentralized Identity Foundation develops standards that prioritize privacy, supports implementations that respect user autonomy, and advocates for technical architectures that prevent tracking and add friction to data misuse. Our membership includes many technologists and vendors designing tracking-free alternatives for these and other use cases.
We encourage you to read the full No Phone Home statement at https://nophonehome.com. Whether you are building, deploying, or using these systems, your voice matters at this critical juncture.
The question is not whether we can build privacy-preserving digital identity—it is whether we will choose to do so. Let's build it right.
The Decentralized Identity Foundation (DIF) is an engineering-driven organization focused on developing the foundational elements necessary to establish an open ecosystem for decentralized identity and ensure interoperability between all participants. Learn more at identity.foundation.
The OpenID FAPI working group recommends the approval of Errata corrections to the following specification:
First Errata Set for JWT Secured Authorization Response Mode for OAuth 2.0 (JARM)
An Errata version of a specification incorporates corrections identified after the Final Specification was published. This would be the first set of errata corrections for JWT Secured Authorization Response Mode for OAuth 2.0 (JARM). The corresponding previously approved specification is available at:
https://openid.net/specs/oauth-v2-jarm-final.html
This note starts the 45-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve these errata corrections. For the convenience of members who have completed their reviews by then, voting will actually begin a week before the start of the official voting period.
The relevant dates are:
- Errata public review period: Thursday, May 29, 2025 to Sunday, July 13, 2025 (45 days)
- Errata vote announcement: Monday, July 14, 2025
- Errata early voting opens: Monday, July 21, 2025*
- Errata official voting period: Monday, July 28, 2025 to Monday, August 4, 2025 (7 days)
* Note: Early voting before the start of the formal voting period will be allowed.
The OpenID FAPI working group page is https://openid.net/wg/fapi/.
Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.
You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the Contribution Agreement at https://openid.net/intellectual-property/ to join the working group, (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-fapi, and (3) sending your feedback to the list.
Marie Jordan – OpenID Foundation Secretary
The post Public Review of JWT Secured Authorization Response Mode for OAuth 2.0 (JARM) first appeared on OpenID Foundation.
The OpenID FAPI working group recommends the approval of the following specification as an OpenID Final Specification:
FAPI 2.0 Message Signing
A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This note starts the 60-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve this draft as an OpenID Final Specification. For the convenience of members who have completed their reviews by then, voting will actually begin a week before the review period ends.
The relevant dates are:
- Final Specification public review period: Thursday, May 29, 2025 to Monday, July 28, 2025 (60 days)
- Final Specification vote announcement: Tuesday, July 29, 2025
- Final Specification early voting opens: Tuesday, August 5, 2025
- Final Specification official voting period: Tuesday, August 12, 2025 to Tuesday, August 19, 2025 (7 days)
* Note: Early voting before the start of the formal voting period will be allowed.
The OpenID FAPI working group page is https://openid.net/wg/fapi/.
Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.
You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the Contribution Agreement at https://openid.net/intellectual-property/ to join the working group, (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-fapi, and (3) sending your feedback to the list.
Marie Jordan – OpenID Foundation Secretary
The post Public Review Period for Proposed FAPI 2.0 Message Signing Final Specification first appeared on OpenID Foundation.
Digital identity security is entering a new era as the OpenID Connect Core 1.0 specification has been formally adopted, by incorporation, by the International Telecommunication Union (ITU) as Recommendation X.1285. This milestone represents the first time an OpenID Foundation (OIDF) specification has been recognized as an ITU-T standard, and it offers implementers even more confidence in the stability of digital IDs. Publication by the ITU will follow.
The OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling ‘networks of networks’ to interoperate globally.
This new Recommendation has been driven by the collaborative work of ITU-T Study Group 17 on standards-based authentication and identity frameworks that support secure, interoperable, and privacy-preserving digital ecosystems.
This recommendation by ITU-T places OpenID Connect Core 1.0 alongside other internationally significant identity specifications, such as the OSIA family of APIs (X.1281), which were recognized by ITU-T in March 2024, and the FIDO2 family of protocols, which have been recognized by ITU-T since the March 2018 ITU-T SG17 meeting.
Celebrating a team achievement
Securing ITU-T approval for an OpenID Foundation specification was no small feat. Over the past two years, ITU-T leaders, the OpenID Connect Working Group, and national delegations have reviewed the OpenID Connect Core 1.0 specifications and built the consensus needed for formal ITU-T Recommendation.
A heartfelt ‘thank you’ goes to:
- Arnaud Taddei, Chair of ITU-T Study Group 17 (SG17) (Security)
- Abbie Barbir, Co-Rapporteur for ITU-T SG17 Question 10 (Q10/17) (Identity Management)
- Hiroshi Takechi, Co-Rapporteur for ITU-T SG17 Question 10 (Q10/17) (Identity Management)
- Debora Comparin, Chair of ITU-T SG17 Working Party 1 (WP1/17)
- Bjorn Hjelm, ITU Liaison and Lead Editor, ITU-T versions of OpenID Connect specifications
- Stephanie de Labriolle, SIA for SIDI Hub Q10 Liaison Manager
- Hiroshi Ota, ITU Secretariat for SG12 & SG15
- Xiaoya Yang, Counsellor of ITU-T Study Group 17 – Security
- Oscar Giovanni León Suárez, Project Manager, Social Investment Solutions
- Mike Jones, Co-chair A/B Connect WG, and OIDF Board Member
- John Bradley, Co-chair A/B Connect WG, and OIDF Board Member
- Nat Sakimura, Co-chair A/B Connect WG, and Chairman, OpenID Foundation
Plus sincere thanks overall to ITU-T SG17 members who found a consensus to consent this text by incorporation, according to the Accelerated Approval Process (AAP), at the closing plenary of its last meeting in April 2025.
“Reaching X.1285 is a landmark success for our community. By incorporating OpenID Connect Core with ITU-T standards, we’re ensuring interoperable, secure identity on a truly global scale,”
—Debora Comparin, ITU-T WP1/17 Chair
“This recognition validates years of work and trust-building between the OpenID Foundation and ITU. It also sets a clear path for other OpenID Foundation specifications to follow, and establishes a solid cornerstone both for the industry and security itself as well as for all member states to adopt and leverage in their overall regulations with urgency on digital wallet, digital public infrastructure, telecommunication/ICTs. ”
—Arnaud Taddei, Chair, ITU-T SG17 / Global Security Strategist, Enterprise Security Group
“This is a milestone for ITU-T SG17 Question 10. We look forward to continued collaboration with OpenID Foundation in all areas including all decentralized Identity specifications. Together we can make these protocols globally accepted and interoperable”
–Abbie Barbir, Co-Rapporteur for ITU-T SG17 Question 10 (Identity Management)
The recognition of OpenID Connect Core by ITU reflects increasing alignment across standards organizations to support international interoperability. These ITU-T standards now serve as critical enablers of global trust frameworks, often being referenced in national policies and regulatory guidance.
A vision realized – thanks to Nat Sakimura
The idea to pursue formal ITU recognition for an OIDF spec originated with Nat Sakimura, a long-time advocate for global standardization. Nat’s early vision, and tireless coordination with national administrations, sparked the initiative:
“We are grateful to the ITU community for this first formal recognition of an OpenID Foundation standard as now an ITU Recommended standard. This recognition is a testament to the existing value of the OpenID Connect Core standard to the global community, and the potential to reach an even wider community with the ITU’s support.”
—Nat Sakimura, Chairman OpenID Foundation
This ITU milestone comes on the heels of another major achievement. In December 2023, the OpenID Foundation submitted a suite of OpenID Connect specifications to the ISO as Publicly Available Specifications (PAS). Following the ISO approval vote, the nine documents were published in 2024.
Before submission, the OpenID Connect Working Group meticulously applied all known errata to the specifications to ensure the ISO versions reflected the most up-to-date, error-free text. Publication of these ISO/IEC standards has already begun to foster broader adoption of OpenID Connect, particularly in jurisdictions requiring compliance with internationally recognized standards bodies.
Looking ahead – more standards on the horizon
With our first ITU and ISO/IEC successes secured, the OpenID Foundation will submit the remaining suite of OpenID Connect specifications, followed by the next wave of formal international OpenID Foundation standards. They include:
- FAPI 1.0 – ISO PAS submission planned mid-2025, ITU recognition targeted for 2H 2025
- OpenID for Identity Assurance – ISO PAS submission planned mid-2025, ITU recognition targeted for 2H 2025
- FAPI 2.0 – targeting ISO PAS submission mid-2025, ITU recognition targeted for 2H 2025
Thank you to everyone who contributed to this journey with ITU, from editors and reviewers to national delegates and community volunteers. Together, we’re strengthening the security, interoperability, and global reach of digital identity. The road ahead is bright. We look forward to more OpenID Foundation specifications achieving formal recognition and empowering secure, seamless user experiences worldwide.
About ITU-T SG 17
ITU-T Study Group 17 (SG17) is the International Telecommunication Union’s (ITU-T) primary body for security standardization. It is the lead study group on Identity Management. It focuses on developing standards to address security aspects of telecommunications, ICTs, and related applications. SG17 works on a wide range of security-related topics, including cybersecurity, security management, and identity management.
The post A new ITU-T standard ushers in a new era for OpenID first appeared on OpenID Foundation.
On May 5, 2025, the OpenID Foundation convened a landmark interoperability demonstration of digital identity credentials that brought together leading identity platforms, standards bodies, and government agencies from three regions. This is the first public demonstration of how a user can present their digital identity across digital platforms, devices, and credential formats in a way that is secure, private, interoperable, and globally scalable.
This interoperability event had four objectives:
- Prove out new specifications and open source tests by demonstrating their interoperability in online transactions
- Provide results and implementer insights back to standards body peers
- Brief government partners on standards progress
- Support global-scale adoption

Interoperability Testing
The event featured pair-wise and multi-wallet remote testing. Out of 224 possible wallet/verifier pairings, 153 tests were conducted, with over 90% passing successfully: this demonstrated the stability of the underlying specifications.
The specifications tested included:
- The OpenID Foundation’s OpenID for Verifiable Presentations (VP) draft 24
- The OpenID Foundation’s OpenID for Verifiable Credentials High Assurance Interoperability Profile Implementer’s Draft 2.0
- W3C’s Digital Credentials API (DC API)
- FIDO’s CTAP specification
- ISO/IEC IS 18013-5:2021 Mobile Driving License (mDL)
- ISO/IEC TS 18013-7:2024 Mobile Driving License
- IETF’s SD-JWT draft 17 https://datatracker.ietf.org/doc/draft-ietf-oauth-selective-disclosure-jwt/
- OAuth 2.0
- OpenID Connect Core 1.0
- WebAuthn

Pairwise Testing
The participants executed four request scenarios against the user’s digital identity credential stored on their device. These query scenarios use the new Digital Credentials Query Language (DCQL). They ranged from simple name retrieval in a mobile driving license (mDL) to more complex ‘over-18’ age proofs in both mobile driving license (mDL/mdoc) and Selective Disclosure for JWTs (SD-JWT) formats. Under the High-Assurance Interoperability Profile (HAIP), verifiers required signed requests, encrypted responses, and client-ID restrictions. These tests successfully demonstrated that enhanced privacy and security need not compromise developer productivity.
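As a rough illustration of the kind of query described here, the sketch below shows what an over-18 DCQL request might look like. It follows the general shape of DCQL in OpenID for Verifiable Presentations draft 24, but member names and format identifiers have shifted between drafts, and the `vct` value is a made-up placeholder; treat this as a sketch, not a normative example.

```python
import json

# Illustrative DCQL query asking for an over-18 proof in either an
# ISO mdoc/mDL or an SD-JWT VC (field names may differ across drafts).
dcql_query = {
    "credentials": [
        {
            "id": "age_check_mdl",
            "format": "mso_mdoc",                        # ISO 18013-5 mdoc/mDL
            "meta": {"doctype_value": "org.iso.18013.5.1.mDL"},
            "claims": [
                {"path": ["org.iso.18013.5.1", "age_over_18"]}
            ],
        },
        {
            "id": "age_check_sdjwt",
            "format": "dc+sd-jwt",                       # SD-JWT VC
            "meta": {"vct_values": ["https://example.com/credentials/pid"]},
            "claims": [
                {"path": ["age_equal_or_over", "18"]}
            ],
        },
    ],
    # Either format satisfies the same over-18 check.
    "credential_sets": [
        {"options": [["age_check_mdl"], ["age_check_sdjwt"]]}
    ],
}
print(json.dumps(dcql_query, indent=2))
```

Note how the wallet, not the verifier, resolves which matching credential to present, and only the requested claim (the over-18 boolean) is disclosed.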
Against this backdrop, the OpenID Foundation assembled more than a dozen implementers (8 wallets and 7 verifiers) prepared to test interoperability with each other. They included:
- 1Password
- Android
- Animo / Funke Wallet
- Bundesdruckerei
- MATTR
- Microsoft
- OpenID Foundation (open source tests)
- Panasonic
- Scytales
- Spruce
- An anonymous corporate participant

Multi-Wallet Remote Interoperability Testing
Microsoft supported remote interoperability testing for the event with an architecture tailored for the NIST NCCoE mDL Project. Their contribution aimed at enabling interoperability across multiple standards and technical specifications to ensure high success rates in digital identity verification for Mobile Driver’s Licenses (mDL) and Person Identification Data (PID) credentials across multiple wallets, using a single verifier provisioned by Mattr. The event achieved an overall success rate of 91.75%, measured across the following key elements:
- Orchestration Layer: A Microsoft Entra tenant to coordinate interactions between multiple wallets and a single verifier (Mattr)
- Remote Presentation Architecture: Wallet-independent support for same-device and cross-device flows for multiple profiles and datasets
- Wallets Tested: 1Password; Animo: Funke; Bundesdruckerei: InnoWallet; Google: Multipaz; Google: CM Wallet; Scytales: Scytales Wallet; Spruce ID: Showcase
At the end of the event, VIP observers bore witness to the demonstrations, assessing both the technical successes and the implications for their respective jurisdictional roadmaps. VIP observers participating in the live event were Ryan Galluzzo from the United States NIST, Soshi Hamaguchi from the Japan Digital Agency, and Ajay Gupta from the California DMV. A big thanks to all of our VIP observers and our interop participation teams.
Why This Demonstration Mattered
Over the past 18 months, verifiable credentials have moved from promising prototypes to production pilots in finance, healthcare, transportation, and government services. In the California DMV and OIDF hackathons in October and November 2024, we saw a wide range of use cases demonstrated, and in Europe, we have seen a wide range of use cases realized as part of the European Digital Identity Wallet’s Large Scale Pilots. More recently, the UK Government, the Swiss Confederation, and the Japan Digital Agency declared their selection of OpenID for Verifiable Presentations in their digital identity projects. Other jurisdictions are poised to follow suit. This interop event on May 5th served as the crucible to demonstrate the real-world interoperability of specifications that are on their way to final, in line with European Digital Identity Wallet and NIST NCCoE Mobile Driver’s License (mDL) project timelines. To get to this point, there were three key components:
- Liaisons and Partnerships: The Foundation has long-term liaisons and partnerships with peer standards bodies such as W3C, ISO, IETF, and the FIDO Alliance. These liaisons and ongoing technical conversations have underpinned the development of interoperable standards demonstrated on May 5th, alongside the large number of leading-edge implementers ready to prove out the specifications together. The results of this interoperability event will be shared via a liaison statement with ISO/IEC 18013-7 WG10 to inform this Work Group’s due diligence on online presentation specifications.
- Conformance Testing: OIDF developed open-source tests aligned to the specifications for use before and during the interop event. The implementers could prove their implementations against the tests before testing against their peers and provide critical feedback on the tests for the benefit of future implementers. Once the specs and open-source tests are finalized, implementers will be able to self-certify their implementations. Self-certification and other conformance requirements are often vital components of ecosystem governance to ensure all participants in an ecosystem have met the same high bar for security and interoperability.
- Hybrid Interop Events: Finally, this event was hosted in a hybrid format, with some implementers together in Berlin and others remotely located in Tokyo, San Francisco, and beyond. The interop event proved that online presentation using OpenID for Verifiable Presentations works across platforms, across devices, and for different credential types.
Gail Hodges concluded the session by framing the importance of this moment:
“The ability to issue a credential in one ecosystem, present it through any wallet, and verify it online in another jurisdiction, seamlessly and securely, is the future of digital identity. Today we proved that it’s not just a vision, but a reality.”

Implementer Feedback
The feedback from the implementers on the benefits of the May 5th interop is positive, and encouraging evidence that we are at an inflection point:
Juliana Cafik, Principal Program Manager and Identity Standards Architect at Microsoft, said:
“Microsoft is dedicated to leading the advancement of secure and privacy-preserving digital identity solutions. This event highlights our commitment to creating a seamless and interoperable digital identity ecosystem through collaboration with global stakeholders. By participating in the development of open standards, we are ensuring that digital identities are universally accessible and trusted, empowering everyone to engage confidently in the digital world.”
Marina Ioannou from Scytales described their interoperability experience as “smooth and engaging.” Scytales, in partnership with Netcompany-Intrasoft—part of the NETCOMPANY Group A/S—was awarded the contract by the European Commission to develop the European Digital Identity Wallet (EUDI Wallet). This wallet aims to deliver a universal, interoperable digital identity solution, enabling electronic signatures, document validation across sectors, and full transparency in data usage.
“Participating in interoperability events like this one is an opportunity to validate the latest protocol drafts and versions, test the robustness of our implementation strategy and highlight any need for greater clarity in specifications to ensure better alignment.
At Scytales, we believe it’s critical for the global decentralized digital identity community to prioritize and strengthen interoperability across regions and ecosystems. As adoption accelerates worldwide, it’s vital that solutions built in one region work seamlessly with those from another. Only then can we ensure secure, privacy-preserving, and consistent identity experiences for users everywhere.”
Similarly, 1Password praised the clarity of the documentation and the supportive developer community:
“1Password is excited to be involved as a wallet and support the adoption of OID4VC. Thanks to the excellent documentation and developer community around these initiatives, we quickly built an early example of verifiable credential support in 1Password for Android.”
Micha Kraus of Germany’s Bundesdruckerei GmbH, the federal technology company that prints German passports and identity credentials, observed:
“The new Digital Credentials (DC) API is a really critical building block for building wallets and for relying parties in multi-party, multi-device, multi-wallet scenarios. I’m looking forward to seeing the implementations of the DC API on different platforms. What has worked particularly well over the past month is that the editors and contributors were not afraid to re-evaluate some options, while removing other options in favour of simplicity, and to drive true interoperability. This was very much appreciated.
My wish for the global community is that more parties will join the ‘club’ and see how easy it is to implement this. We are already seeing some very good applications that are secure, user friendly, and privacy preserving.”
Dirk Balfanz, from Google and an OpenID Foundation Board member, said:
“The way DCQL has simplified our development process has been a really positive change, it’s made implementation much easier. The results we’re seeing in the interoperability tests are giving the Android team confidence that we’re nearing the point where this can be rolled out at scale. Scaling up always brings a level of caution, as they want to ensure everything is thoroughly vetted before launch. But at this stage, they feel like we’re getting there.”
“We’re now in a position to confidently recommend this to our developers. Global scalability has always been a key goal of this project, and it’s exciting to see that becoming a reality.” You can read more about Android’s support of digital credentials here.
Wayne Chang, Founder and CEO of SpruceID said, “At SpruceID, we believe open standards are essential to building a digital identity ecosystem that is secure, private, and controlled by users. We are proud to demonstrate with this group that these technologies create secure interoperability for a user-centric model of digital identity.”
Timo Glastra from Animo shared their conclusions after the event in a LinkedIn blog post:
“The overall results were a major success for proving the specification maturity, reaching over a 90% success rate. Most of the wallets and verifiers were able to get all DCQL queries working with both SD-JWT VC and mDOC.” Timo continued, “We’re looking forward to continue the interoperability testing and achieve 100% success rate across all wallets and verifiers.”
Oliver Terbu from MATTR stated:
“Over the past two interop events, we’ve seen clear signs of maturity across specifications like OID4VP and the Digital Credentials API, with near-perfect success rates. What’s encouraging is not just that things are working. We are also seeing convergence around practical implementation patterns using a common simplified query language that reduces complexity and code duplication, especially across different credential formats. As a next step, the issuance side is gaining traction. The DC API is beginning to demonstrate how a consistent, browser-integrated approach can support seamless and secure credential issuance, while building on the existing OID4VCI specification.”

Voices from government and standards leadership
From Japan’s Digital Agency, Soshi Hamaguchi highlighted a successful proof-of-concept integrating the country’s My Number Card for student enrollment, certification and transit ticketing.
“Though limited in scope, this demo showed how global specifications can combine with national identity schemes to streamline everyday services,” he reflected.
Torsten Lodderstedt, the specification editor and co-chair of the Digital Credentials Protocol Working Group, expressed gratitude for the live feedback from implementers:
“Validating these features in real-world scenarios is crucial as we head toward final publication,” he said, inviting new contributors to help the working group evolve the specification beyond its first edition.

Next Steps Toward Global Adoption
To close the day, OIDF Executive Director Gail Hodges reminded attendees that the OpenID4VP specifications are on track for final publication around the end of June 2025 and are currently open for public comment. Partnerships with W3C, ISO, FIDO, and other standards body peers will continue, and further pilots are already underway: from opening a bank account in partnership with the NIST NCCoE Mobile Driving License (mDL) project, to transit in Japan, to collaborations with open source providers like MOSIP to bring standards into the wallet solutions of lower-income governments.
“This interoperability demonstration is a pivotal milestone in a long journey,” Gail concluded. “Thank you to every implementer, observer and liaison partner for making it possible. Together, we are building a truly global digital identity ecosystem, one that empowers users, protects privacy and delivers real-world value.”
The lessons learned at this event will resonate across the next wave of deployments. By proving that diverse wallets and verifiers can interoperate at scale, the OpenID Foundation and its partners have set a new standard: open, secure, and universally compatible credentials that travel anywhere in the world.
You can watch the online demonstration of the next generation of digital identity credentials here.
The post A Global First: OpenID Foundation Demonstrates Real-World Interoperability of New Digital Identity Standards first appeared on OpenID Foundation.
In 1958, Oliver R. Smoot, a student at MIT, famously served as a human measuring stick for the length of the Harvard Bridge. Smoot, measuring a mere 5 feet 7 inches (1.70 m), would lie down on the bridge as his associates noted his position. It took a fair bit of time to measure the bridge, as Smoot had to be carried by his associates to each new position, but the effort was well worth it. Thus, a playful, unconventional unit of measure was born. Over the years, the smoot has endured as a symbol of creativity, collaboration, and the unique quality of grassroots development.
Mercari, the Japanese e-commerce company behind the Mercari marketplace, has surpassed 10 million registered users of passkeys for authentication.
Yubico, a provider of hardware authentication security keys, has announced the expanded availability of YubiKey as a Service to all countries in the European Union.
This builds upon the company’s existing reach in markets such as the UK, U.S., India, Japan, Singapore, Australia and Canada. In addition, Yubico has expanded the availability of YubiEnterprise Delivery across 117 new locations around the world.
This brings the total to 199 locations (175 countries and 24 territories) and more than doubles existing delivery coverage of YubiKeys to both office and remote users in a fast and turnkey way. “Enterprises today are facing evolving cyber threats like AI-driven phishing attacks,” said Jeff Wallace, senior vice president of product at Yubico.
Authentication software company Entersekt has launched a partnership with South Africa-based PayTech solution provider Stanchion.
The partnership is aimed at “enhancing payment integration capabilities and delivering cutting-edge solutions to financial institutions worldwide,” the companies said in a Wednesday (May 21) news release.
The collaboration combines Stanchion’s tools for “modernizing, transforming, and accelerating innovation within payment systems” with Entersekt’s 3-D Secure payment authentication solution, which provides transaction authentication across all three domains: the merchant acquirer domain, the card issuer domain and the interoperability domain.
Passwords have long served as the first line of defense for digital identity, but rampant identity theft and the steady increase in security breaches across platforms are fast rendering them obsolete. Their protective value has eroded, and an easier, stronger way to suppress theft is needed.
Passkeys are rising to meet that need, substantially bolstering digital identity protection.
Beyond the immediate promise of the Shared Signals Framework in managing live sessions through CAEP events, an event-based approach offers a compelling path forward for addressing longer-term identity challenges. One such challenge is identity lifecycle management, or provisioning and deprovisioning.
Challenges of provisioning
Many underestimate the challenges of provisioning; for those who have not considered it, it may appear relatively simple on the outside. Much like watching a talented busker juggling on the streets of…oh, really any international city…say London, for no real reason other than the (most recent) interop held in that fine city.
Watching someone juggle looks easy, and starts relatively simple. A couple of balls fly into the air and bounce between the hands of the performer. Back and forth go the objects. The transfer seems simple, easy. And provisioning can, indeed, be like that: two discrete systems with fairly similar schemas and use cases (like an HR system syncing into a local employee directory). But keep watching the busker…things rapidly get more complicated. Now instead of two objects, there are six or eight. The balls are exchanged for sticks, which are then set on fire. Now things aren’t so simple or easy, are they? The real world of provisioning escalates in much the same way; as the number of systems proliferates, so do the interconnections that must be maintained. This soon becomes an intractable problem (even without setting it on fire).
In an effort to address the challenges of managing the lifecycle of an identity, the System for Cross-Domain Identity Management (SCIM) was created. While SCIM has seen some success in connecting identity repositories, the rise of event-based architectures points to the need for SCIM to move from transactional and bulk operations to an event-based and asynchronous approach that allows interconnected systems to share identity context and data in near real time.
The SCIM Profile for Security Event Tokens seeks to provide a pathway for SCIM in event-based architectures.
Playing off the standards
SCIM events employ existing standards to accomplish this goal, using the same format of Security Event Tokens as in CAEP and Risk Incident Sharing and Coordination (RISC) events. It’s important to note that while SCIM Events may use the Shared Signals Framework as a transport layer, it does not require it. The juggler may be tossing objects in the air over land or water; they might be throwing clubs, rings, or balls. It’s all entertaining. SCIM events can be transported via push/pull over HTTP, streaming technologies like Kafka or Kinesis, webhooks, or SSF. The specification is agnostic. That said, the popularity of the Shared Signals Framework may make it a preferred option.
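To make this concrete, here is a hedged sketch of a SCIM-style provisioning event carried as a Security Event Token (RFC 8417), signed as a JWT with the PyJWT library. The event URI and payload members are illustrative (note the `urn:example` prefix), patterned on the general shape of the SCIM Events draft rather than copied from it.

```python
# pip install pyjwt
import time
import jwt  # PyJWT

# A Security Event Token (RFC 8417) carrying a SCIM-style provisioning event.
set_payload = {
    "iss": "https://hr.example.com",           # transmitting system
    "aud": "https://directory.example.com",    # receiving system
    "iat": int(time.time()),
    "jti": "4d3559ec67504aaba65d40b0363faad8",
    "events": {
        "urn:example:scim:event:prov:create": {   # hypothetical event URI
            "data": {
                "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
                "userName": "ajones",
                "active": True,
            }
        }
    },
}

# Sign and transport (SSF stream, Kafka topic, webhook -- the spec is agnostic).
token = jwt.encode(set_payload, "shared-secret", algorithm="HS256")

# The receiver validates the token and reacts on its own timeline.
received = jwt.decode(token, "shared-secret", algorithms=["HS256"],
                      audience="https://directory.example.com")
for event_uri, body in received["events"].items():
    print(f"provisioning event {event_uri}: {body['data']['userName']}")
```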
Syncing vs Notification: A diverse approach
With the adoption of an event-based approach, this iteration of SCIM allows for more than just a flow of updated information; while some systems may need and be able to process a stream of events, others may prefer to be only notified of changes. In practice, this allows them to request only the changes that they’re interested in on a secure back channel, saving effort and preserving privacy. Other systems may need to process asynchronously, dealing with this new information on their own, independent timeline. The standard allows for this customizable approach, making it flexible for a diverse set of architectures.
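The contrast between the two styles is easiest to see side by side. In this sketch the event URIs and member names are hypothetical; the design point is that a notice event moves minimal data, and the receiver pulls only what it is entitled to see over a normal SCIM GET.

```python
# Two illustrative delivery styles for the same change (member names are
# hypothetical, patterned loosely on the SCIM Events draft).

# 1. FULL event: the payload carries the changed attributes themselves.
full_event = {
    "urn:example:scim:event:prov:patch": {
        "data": {"active": False},   # the change, inline
        "uri": "https://hr.example.com/scim/v2/Users/2819c223",
    }
}

# 2. NOTICE event: only a pointer. The receiver fetches the details it
#    cares about over a secure back channel, so minimal data crosses the
#    wire and privacy is preserved.
notice_event = {
    "urn:example:scim:event:prov:notice": {
        "uri": "https://hr.example.com/scim/v2/Users/2819c223",
    }
}

def on_notice(event: dict, fetch):
    """Receiver-side handler: pull only the attributes we need."""
    for body in event.values():
        resource = fetch(body["uri"] + "?attributes=active")  # SCIM GET
        print("user active?", resource["active"])

on_notice(notice_event, fetch=lambda uri: {"active": False})  # stubbed fetch
```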
Benefits of real-time SCIM events
The ability of SCIM Events to support real-time provisioning is significant. Once adoption takes place and real-time updates can be shared within an enterprise system, zero standing privilege (and zero trust) becomes one step closer to a reality. Making changes in real time means that access is no longer permanent: it is provisioned as an identity requires it, and then removed once it is no longer needed.
SCIM events hold more promise than provisioning alone: by ingesting data from various sources in real time, a system finds its true security potential. An access policy that an organization wants to enforce needs current data and attributes. With input from SCIM events alongside CAEP and RISC as the sources of real-time information, the policy becomes as current as it can be.
Privacy will also benefit greatly from the adoption of SCIM events, through data minimization. Limiting data collection to only what is necessary is fundamental to privacy approaches. Minimal-information SCIM events allow a receiver to decide which attributes or resource lifecycle changes it will accept, and a transmitter to restrict what it sends to a receiving domain. Less data acquired translates to less risk for the organization.
Within a system, SCIM events can be used as a reaction to CAEP and RISC events. When those events relay that a session has been eliminated, the system can react to the underlying attribute data and to the accounts themselves to prevent future use.
Finally, an auditable record of everything that takes place within an organization is essential for proving compliance and detecting risk within the enterprise. It is not enough to eliminate invalid sessions or adjust access and entitlements; all actions taken within a system must be recorded to ensure that it is “living by its own rules” and doing precisely what it claims to be doing. Thus, SCIM events enable not only provisioning in real time, but auditing and governance as well.
SCIM Events: A logical step
By adopting SCIM events, we move closer to addressing the long-term problem of provisioning: fragmented data stores, proprietary interfaces, and a disjointed approach to identity – the historical equivalent to juggling fire. But we achieve more than that as we move into this new paradigm: we gain real-time context for enhanced policy-based decision making. We bolster privacy-enhancing approaches to data minimization. We create concrete responses to CAEP and RISC events. And we do it all with an audit record that proves that we are moving closer to our goal – a world in which access is not available except when it is necessary.
In short, SCIM Events moves provisioning from a task that only seems simple into one that actually is. (The authors are still working on the other intractable problem: keeping five flaming bowling pins in the air simultaneously.)
About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation's OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation's standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling "networks of networks" to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Juggling with fire made easier: Provisioning with SCIM first appeared on OpenID Foundation.
The official voting period will be between Monday, June 9, 2025 and Monday, June 16, 2025 (12:00pm PT), once the 45-day review of the specification has been completed. For the convenience of members who have completed their reviews by then, early voting will begin on Monday, June 2, 2025.
The OpenID EAP working group page is https://openid.net/wg/eap/. If you’re not already an OpenID Foundation member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.
The vote will be conducted at https://openid.net/foundation/members/polls/358.
Marie Jordan – OpenID Foundation Secretary
The post Notice of Vote for Proposed Final EAP ACR Values Specification first appeared on OpenID Foundation.
As a precursor to the LF Decentralized Trust Mentorship 2025 Project, “Blockchain-Based OAuth 2.0 Authorization in 5G Core Networks with Hyperledger Fabric,” this blog is the first in a series that explores the evolution of OAuth 2.0 authorization in 5G Core Networks, highlighting its current limitations and the potential of decentralized frameworks to enhance security. In this series, we will discuss how OAuth 2.0 operates in 5G, its risks, and how the integration of Hyperledger Fabric can overcome these challenges.
WAO is currently working with the Responsible Innovation Centre for Public Media Futures (RIC), which is hosted by the BBC. The project, which you can read about in our kick-off post, is focused on research and analysis to help the BBC create policies and content that improve the AI Literacy skills of young people aged 14–19.
We’re now at the stage where we’ve reviewed academic articles and resources, scrutinised frameworks, and reviewed input from over 40 experts in the field. They are thanked in the acknowledgements section at the end of this post.
One of the things that has come up time and again is the need for an ethical basis for this kind of work. As a result, in this post we want to share the core values that inform the development of our (upcoming) gap analysis, framework, and recommendations.
Public Service Media Values

Public Service Media (PSM) organisations such as the BBC have a mission to "inform, educate, and entertain" the public. The Public Media Alliance lists seven PSM values underpinning organisations' work:

- Accountability: to the public who fund it and hold power to account
- Accessibility: to the breadth of a national population across multiple platforms
- Impartiality: in news, and quality journalism and content that informs, educates, and entertains
- Independence: both in terms of ownership and editorial values
- Pluralism: PSM should exist as part of a diverse media landscape
- Reliability: especially during crises and emergencies and in tackling disinformation
- Universalism: in their availability and representation of diversity

These values are helpful in framing core values for the development of AI Literacy in young people aged 14–19.
AI Literacy Core Values

Using the PSM values as a starting point, along with our input from experts and our desk research, we have identified the following core values. These are also summarised in the graphic at the top of this post.
1. Human Agency and Empowerment

AI Literacy should empower young people to make informed, independent choices about how, when, and whether to use AI. This means helping develop not just technical ability, but also confidence, curiosity, and a sense of agency in shaping technology, rather than being shaped by it (UNESCO, 2024a; Opened Culture, n.d.). Learners should be encouraged to question, critique, adapt, and even resist AI systems, supporting both individual and collective agency.
2. Equity, Diversity, and Inclusion

All young people, regardless of background, ability, or circumstance, should have meaningful access to AI Literacy education (Digital Promise, 2024; Good Things Foundation, 2024). Ensuring this in practice means addressing the digital divide, designing for accessibility, and valuing diverse perspectives and experiences. Resources and opportunities must be distributed fairly, with particular attention to those who are digitally disadvantaged or underrepresented.
3. Critical Thinking and Responsible Use

Young people should be equipped to think critically about AI, which means evaluating outputs, questioning claims, and understanding both the opportunities and risks presented by AI systems. In addition, young people should be encouraged to understand the importance of responsible use, including understanding bias, misinformation, and the ethical implications of AI in society (European Commission, 2022; Ng et al., 2021).
4. Upholding Human Rights and Wellbeing

Using a rights-based approach — including privacy, freedom of expression, and the right to participate fully in society — helps young people understand their rights, navigate issues of consent and data privacy, and recognise the broader impacts of AI on wellbeing, safety, and social justice (OECD, 2022; UNESCO, 2024a).
5. Creativity, Participation, and Lifelong Learning

AI should be presented as a tool for creativity, collaboration, and self-expression, not just as a subject to be learned for its own sake. PSM organisations should value and promote participatory approaches, encouraging young people to contribute to and shape the conversation about AI. This core value also recognises that AI Literacy is a lifelong process, requiring adaptability and a willingness to keep learning as technology evolves (UNESCO, 2024b).
Next Steps

We will be running a roundtable for invited experts and representatives of the BBC in early June to give feedback on the gap analysis and emerging framework. We will share a version of this after acting on their feedback.
If you are working in the area of AI Literacy and have comments on these values, please add them to this post, or get in touch: hello@weareopen.coop
Acknowledgements

The following people have willingly given up their time to provide invaluable input to this project:
Jonathan Baggaley, Prof Maha Bali, Dr Helen Beetham, Dr Miles Berry, Prof. Oli Buckley, Prof. Geoff Cox, Dr Rob Farrow, Natalie Foos, Leon Furze, Ben Garside, Dr Daniel Gooch, Dr Brenna Clarke Gray, Dr Angela Gunder, Katie Heard, Prof. Wayne Holmes, Sarah Horrocks, Barry Joseph, Al Kingsley MBE, Dr Joe Lindley, Prof. Sonia Livingstone, Chris Loveday, Prof. Ewa Luger, Cliff Manning, Dr Konstantina Martzoukou, Prof. Julian McDougall, Prof. Gina Neff, Dr Nicola Pallitt, Rik Panganiban, Dr Gianfranco Polizzi, Dr Francine Ryan, Renate Samson, Anne-Marie Scott, Dr Cat Scutt MBE, Dr Sue Sentance, Vicki Shotbolt, Bill Thompson, Christian Turton, Dr Marc Watkins, Audrey Watters, Prof. Simeon Yates, Rebecca Yeager
References

- Digital Promise (2024). AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. https://doi.org/10.51388/20.500.12265/218
- European Commission (2022). DigComp 2.2: The Digital Competence Framework for Citizens. Luxembourg: Publications Office of the European Union. https://doi.org/10.2760/115376
- Good Things Foundation (2024). Developing AI Literacy With People Who Have Low Or No Digital Skills. Available at: https://www.goodthingsfoundation.org/policy-and-research/research-and-evidence/research-2024/ai-literacy
- Jia, X., Wang, Y., Lin, L., & Yang, X. (2025). Developing a Holistic AI Literacy Framework for Children. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–16). ACM. https://doi.org/10.1145/3727986
- Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
- OECD (2022). OECD Framework for the Classification of AI Systems. Paris: OECD Publishing. https://www.oecd.org/en/publications/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en.html
- Opened Culture (n.d.). Dimensions of AI Literacies. Available at: https://openedculture.org/projects/dimensions-of-ai-literacies
- Open University (2025). A Framework for the Learning and Teaching of Critical AI Literacy Skills. Available at: https://www.open.ac.uk/blogs/learning-design/wp-content/uploads/2025/01/OU-Critical-AI-Literacy-framework-2025-external-sharing.pdf
- UNESCO (2024a). UNESCO AI Competency Framework for Students. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000391105
- UNESCO (2024b). UNESCO AI Competency Framework for Teachers. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000391104

Core Values for AI Literacy was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
ISL presented at ConPro 2025’s 9th Workshop on Technology and Consumer Protection. The conference was a perfect opportunity to showcase our presentation on Safetypedia: Crowdsourcing Mobile App Privacy and Safety Labels.
The post IEEE’s ConPro ’25: Safetypedia: Crowdsourcing Mobile App Privacy and Safety Labels appeared first on Internet Safety Labs.
Many in the DIF community are asking about the upcoming Global Digital Collaboration conference in Geneva. As the date is fast approaching, we wanted to give a sneak preview of what's ahead.
Table of Contents:

- About the GDC Conference
- The Agenda
- Learn more and participate

About the GDC Conference

Key Details

- When: July 1-2, 2025
- Where: Centre International de Conférences Genève (CICG), Switzerland
- Cost: Free (registration required)
- Register: https://lu.ma/gc25 (registration is available through any co-organizing partner)

What is the GDC?

Global Digital Collaboration is a landmark gathering bringing together 30+ global organizations to advance digital identity, wallets, and credentials - hosted by the Swiss Confederation.
What makes this conference truly unique is that, from the beginning, it's been co-organized by the participating organizations, who have worked with their communities, and with each other, to form an agenda that will help advance the most critical topics in digital identity.
Rather than being driven by a single organization's vision, the GDC represents a collaborative effort where international organizations, standards bodies, open-source foundations, and industry consortia have jointly defined priorities and sessions that address the most pressing challenges in digital trust infrastructure. This multi-stakeholder approach ensures broader perspectives are represented and creates unprecedented opportunities for alignment across traditionally separate communities.
Why Attend?

- Unprecedented collaboration: This conference's collaborative nature bridges organizations that rarely coordinate at this scale.
- Connect: Connect with peers from government and private sectors to advance standards coordination, cross-border interoperability, and robust digital public infrastructure.
- Network with experts: Engage directly with technical leaders, government officials, and industry pioneers shaping the future of digital trust.

Who Is Organizing?

The current list of co-organizers can be seen in the header image, with more to be added later this week. As a brief preview, this includes:
International & Government Organizations

- European Commission (DG-CNECT)
- International Telecommunication Union (ITU)
- United Nations Economic Commission for Europe (UNECE)
- World Health Organization (WHO)

Standards Development Organizations & Open Source Foundations

- Decentralized Identity Foundation (DIF)
- Eclipse Foundation
- European Telecommunications Standards Institute (ETSI)
- FIDO Alliance
- International Electrotechnical Commission (IEC)
- International Organization for Standardization (ISO)
- Linux Foundation Decentralized Trust (LFDT)
- OpenWallet Foundation (OWF)
- Trust Over IP (TOIP)
- World Wide Web Consortium (W3C)

Industry Consortia

- Cloud Signature Consortium (CSC)
- Digital Credentials Consortium (DCC)
- Global Legal Entity Identifier Foundation (GLEIF)

Next, we'll look at the exciting conference agenda and highlight key sessions for the DIF community.
The Agenda

The conference is structured across two distinct days, each with a specific purpose. Day 1 features plenary sessions designed to provide comprehensive overviews of global initiatives and sector-specific developments in digital identity. This agenda is nearly finalized and a draft has been published.
Day 2 offers a more interactive format with parallel presentations, technical deep dives, workshops, and collaborative sessions. The preliminary Day 2 schedule will be published next week, but we can share an early preview of the key themes and sessions that should be of particular interest to the DIF community.
Day 1: Global Landscape & Sector Scan

- Morning sessions feature updates from government and industry stakeholders worldwide
- Afternoon sessions explore major use cases across sectors including travel, health, education, and finance

Morning: Opening & Global Landscape

- Opening addresses by leaders from ITU, ISO, WHO, and more
- Regional updates from: European Commission, Switzerland, United States, China/Singapore, Japan, India, Korea, Australia, Global South

Afternoon: Sector Updates

- 🚘 Driving Licenses
- 🧳 Travel Credentials
- ⚕️ Health Credentials
- 📚 Educational Credentials
- 📦 Trade
- 💸 Payments
- 🏢 Organizational Credentials
- 🪙 Digital Assets
- 🪪 Standards for ID and Wallets
- 🔏 Digital Signatures
- 🔑 Car Keys

Day 2: Technical Deep Dives and Working Sessions

Day 2 features parallel sessions where participants will be encouraged to follow their interests and share their experience and expertise.
Parallel sessions across multiple tracks including:
- Privacy & Security: Zero-knowledge proofs, unlinkability
- Industry and Organizational Focus: Industry 4.0, Digital Product Passports, European Business Wallet
- Implementation & Deployment: Real-world wallet applications
- Standards & Interoperability: Cross-border credential exchange
- Policy & Regulation: Governance frameworks
- Emerging Technology: Emerging needs around AI and digital identity
- Demo Hour: See wallet applications and more

Learn More and Participate

Get Updates

There will soon be a GDC web site to more easily access event information and the schedule. For now, we recommend:
- Follow Global Digital Collaboration on LinkedIn
- And of course, subscribe to the DIF blog for additional updates focused on the DIF community

Ready to Register?

You can register through any co-organizer at https://lu.ma/gc25
👉 DIF community members are encouraged to use DIF's dedicated registration link: https://lu.ma/gc25-dif
Tickets are free of charge and grant full access to the entire conference, regardless of the organization used during registration.
Hotels & Discounts

The upcoming GDC web site will be updated with the latest information. For now, feel free to use the discount codes in this Google document.
Looking Forward

The Global Digital Collaboration conference represents a unique opportunity for advancing digital identity solutions that can work across borders while putting users in control. DIF is committed to ensuring privacy and agency remain front and center in these conversations.
For those in the DIF community and beyond, this is an unparalleled opportunity to shape the future of digital identity in collaboration with global decision-makers and implementers.
How do you build a beverage brand from scratch and land in over a thousand stores?
For Aisha Chottani, it started with stress and a few homemade “potions”.
In this episode, Aisha, Founder and CEO of Moment, joins hosts Reid Jackson and Liz Sertl to talk through what really goes into launching and scaling a functional drink brand. From labeling boxes by hand to managing relationships with co-packers and navigating supply chain failures, Aisha shares the behind-the-scenes story most startup founders keep to themselves.
She also gets real about what went wrong, like barcode mix-ups and Amazon returns gone sideways, and how those lessons became systems that power Moment’s growth today.
In this episode, you’ll learn:
Why small brands need relationships more than volume
How early mistakes can turn into long-term wins
What to watch out for when scaling distribution and operations
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(01:34) Building a global mindset from four continents
(03:07) From McKinsey burnout to homemade “potions”
(06:06) Barcode errors and the pain of early logistics
(08:21) Growing Moment to 1,000 stores and 30 DCs
(11:33) What small brands can leverage
(14:06) Collaborating with Lululemon
(17:15) Why Moment leans into a subscription model
(20:39) Operational failures to learn from
(27:36) Aisha’s favorite technology
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Aisha Chottani on LinkedIn
Check out Moment
Watch the full recording on YouTube.
Status: Verified by Presenter

Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.
Google NotebookLM Podcast
https://trustoverip.org/wp-content/uploads/EGWG-2025-05-15_-The-C2PA-Conformance-Program-Scott-Perry.wav

Here is a detailed briefing document reviewing the main themes and most important ideas or facts from the provided source, generated by Google's NotebookLM:
Briefing Document: Review of C2PA and its Governance

Date: May 15, 2025
Source: Excerpts from “GMT20250515-145218_Recording_2560x1440.mp4”
Presenter: Scott Perry, Co-chair of Trust over IP’s Foundations Steering Committee, Founder and CEO of the Digital Governance Institute, Co-chair of the Creator Assertions Working Group at the Decentralized Identity Foundation (DIF).
Topic: C2PA (Coalition for Content Provenance and Authenticity) and the Application of Trust over IP’s Governance Metamodel.
1. Overview

This briefing summarizes a presentation by Scott Perry on the Coalition for Content Provenance and Authenticity (C2PA) and the application of the Trust over IP (ToIP) governance metamodel to its conformance program. The C2PA is an industry-wide initiative creating a technical standard to attach "truth signals" or provenance information to digital objects. Facing a critical need to operationalize and govern this specification to ensure market trust and adoption, the C2PA has adopted the ToIP governance metamodel. This framework provides the necessary structure to establish a conformance program, define roles and responsibilities, manage risks, and create trust lists for compliant products and certification authorities. The program is set to officially launch on June 4th, initially focusing on self-assertion for conformance and introducing two levels of implementation assurance, with plans for independent attestation and higher assurance levels in the future.
2. Key Themes and Ideas

- The Problem of Trust in Digital Objects: The presentation highlights the growing challenge of establishing trust and authenticity for digital content in a world of easily manipulated or AI-generated media. This is particularly relevant for industries like telecommunications struggling with identity and verification, as noted by a participant's observation about OTPs and SMS verification.
- C2PA as a Standard for Provenance and Authenticity: The C2PA specification aims to provide a technical solution by creating a "content credential" or manifest that is cryptographically bound to a digital object. This manifest acts as a ledger of actions taken on the object, providing a history and "nutrition label" of its source and modifications. "basically, it's all of the major tech companies except Apple… coming together to create a standard for provenence, authenticity, truth signals on digital objects that can be digitally attached to digital objects."
- Content Credential (Manifest): This is the core mechanism of the C2PA. It is a digitally attached ledger of actions taken on a digital object, such as "Camera took picture," "edited picture," or "an AI took this picture." This manifest is "bound to it and linked to it" in a "cryptographically binding format," providing tamper evidence.
- Scope of C2PA Responsibility: The C2PA primarily focuses on "created assertions," which are "product-driven," documenting actions taken within a product (e.g., a camera generating a picture, Photoshop editing an image).
- Distinction from "Gathered Assertions": The C2PA does not take responsibility for "gathered assertions," which are claims made by individuals or organizations outside of a product (e.g., "I Scott Perry took the picture" or industry-specific identifiers). These are the purview of other groups such as the Creator Assertions Working Group (CAWG) at DIF.
- Binding Mechanism: The C2PA uses X.509 certificates to bind the generator product to the digital asset. "when a picture is taken, the X509 certificate will be used will be binding it will be used to bind it bind the product to the asset." This requires camera manufacturers and other product vendors to obtain certificates from approved Certification Authorities (CAs).
- The Need for Governance: While the C2PA created a technical specification, they recognized the critical need for a governance framework to operationalize and control the standard's implementation and use in the market. "the key aspect is you have a spec out but you can't control the use of the specification… they couldn't get, you know, their arms around, you know, the on controlling its the specification use."
- Application of ToIP Governance Metamodel: Scott Perry highlights how the ToIP governance metamodel provided the necessary structure for the C2PA to build its conformance program. "I came in with my toolkit from the the trust over IP project and it worked beautifully. It just created the structure to allow them to make the right decisions for themselves."
- Key Components of the Governance Program (based on ToIP):
  - Risk Assessment: Started with a "threats and harms task force" to identify major risks, particularly around the tampering of evidence and manifests.
  - Governance Requirements and Framework: Defined primary documents (specification, security requirements, legal agreements) and control documents (privacy, inclusion, equitability requirements). A key output is a glossary of terms for the new ecosystem.
  - Governance Roles and Processes: Identified key roles: the Governing Authority (C2PA Steering Committee), the Administering Party (Conformance Task Force), and Governed Parties (CAs, Generator Product companies, Validator Product companies).
  - Legal Agreements: Formal agreements are being established between the C2PA and governed parties outlining roles, responsibilities, conformance requirements, and dispute resolution mechanisms.
  - Conformance Criteria and Assurance: Defined based on the C2PA specification and implementation security requirements. The program includes "four levels of of assurance around the implementation of products," though initially rolling out with two levels. These levels are tied to "security objectives" and assessed against the "target of evaluation" (the product and its supporting infrastructure).
  - Conformance Process: Involves an intake form, application review, assessment of infrastructure (initially self-assertion, moving towards independent attestation), legal agreement signing, and adding records to trust lists.
  - Residual Risk Assessment and Adaptation: The program includes a process to learn from the rollout, identify unmet requirements or issues, and adapt the program for continuous improvement.
- Trust Lists (Registries): Central to the program are trust lists identifying approved Generator Products, Validator Products, and Certification Authorities. A timestamp authority trust list is also being added.
- Levels of Assurance: The program is defining levels (initially rolling out two) to reflect different degrees of confidence in the implementation of the C2PA specification and associated security requirements. Achieving a higher level of assurance requires meeting all requirements for that level.
- Self-Assertion (Initial Rollout): Due to the complexity of auditing and getting the program launched quickly, the initial phase requires participants to self-assert that they meet the specification and requirements.
- Conformance Certificate: Upon successful conformance, products will receive a certificate tied to an OID (Object Identifier) denoting the assurance level they have achieved. This OID in the manifest's certificate will identify the assurance level of the provenance information.
- JPEG Trust and Copyright: While C2PA provides provenance information that can be used for copyright, it doesn't define ownership or copyright laws. JPEG Trust is mentioned as an organization creating an ISO standard focused on copyrights in concert with the C2PA standard.
- Relationship with W3C: The C2PA is actively engaged with the W3C, with discussions happening at the technical working group level regarding related standards like PROV (for provenance).
- Future Directions: Plans include introducing higher levels of assurance, implementing independent attested conformance, developing quality control software for assessing product compliance, and establishing a fee structure for the conformance program.
- CAWG as a Broader Ecosystem: CAWG (the Creator Assertions Working Group) is viewed as a potentially larger ecosystem dealing with identity, metadata, endorsements, and AI learning process specifications, which will need to create their own applications and standards that can integrate with the C2PA foundation.

3. Important Ideas and Facts

- The C2PA is the Coalition for Content Provenance and Authenticity. It includes major tech and product manufacturers, excluding Apple initially but aiming to include them.
- The core technical output is the Content Credential (Manifest), a digitally attached ledger of actions on a digital object. The manifest provides tamper evidence and binds the product to the asset using X.509 certificates.
- C2PA focuses on "created assertions" (product-driven actions), leaving "gathered assertions" (individual/organizational claims) to other groups like CAWG.
- The Trust over IP governance metamodel has been successfully applied to structure the C2PA conformance program.
- The program addresses threats and harms related to tampering and requires adherence to implementation security requirements.
- The C2PA conformance program will officially launch on June 4th at the Content Authenticity Initiative symposium in New York City.
- The initial launch will include two levels of implementation assurance and a self-assertion confidence model.
- Key outputs of the governance program are legal agreements and trust lists of conforming products and certification authorities.
- The C2PA standard is becoming an ISO standard this year.
- Timestamp authorities will play a crucial role in providing trust signals related to the time of claim assertion.
- The program includes mediation and dispute resolution mechanisms in its legal agreements.
- The governance program provides the structure for the C2PA to "operationalize the spec" and control its use.

4. Key Quotes

- "basically, it's all of the major tech companies except Apple… Coming together to create a standard for provenence, authenticity, truth signals on digital objects that can be digitally attached to digital objects."
- "what it what it's proposed to do is to create a ledger of actions against a digital object that is bound to it."
- "It's kind of the nutrition label on food… it's really the nutrition label of all digital objects."
- "The C2PA did not want to get involved in all of the the potential root, you know, actions and and variances about those types of things. They wanted to create the platform."
- "They create the platform and they create the binding between the digital asset and the and the manifest using X509 certificates."
- "The key aspect is you have a spec out but you can't control the use of the specification… they couldn't get, you know, their arms around, you know, the on controlling its the specification use."
- "the governance program was needed to operationalize the spec. The spec was had, you know, a limitation in its usefulness without a governance program around it."
- "I came in with my toolkit from the the trust over IP project and it worked beautifully. It just created the structure to allow them to make the right decisions for themselves."
- "we're creating a program which will hold generator and validator products accountable to the specific ification that's already been published."
- "We are creating two levels of implement implementation assurance and we are are using a self assertion confidence model we don't have the mechanisms in place to hold organizations accountable for meeting the specification we don't have an you know an assurance mechanism in place yet to do that."
- "It is the hope that you know copyright laws can use the trust signals that are coming from the CTBA specification and conformance program in use for defining ownership and copyright."
- "The conformance criteria is the spec and the spec is now at at level 2.2."
- "we are looking at levels of assurance around the implementation of a product. Now it's not just the product but it's also its infrastructure."
- "These are the kinds of records that were that are in the schema for the trust list."

5. Next Steps

- Official launch of the C2PA conformance program on June 4th.
- Continued work on independent attestation and higher levels of assurance for the conformance program.
- Development of quality control software or processes for assessing product compliance.
- Ongoing collaboration with W3C and other relevant standards bodies.
- Further exploration of the broader CAWG ecosystem and its integration with C2PA.

This briefing provides a foundational understanding of the C2PA, its technical specification, and the crucial role of the newly established governance program, structured using the Trust over IP metamodel, in driving its adoption and ensuring trust in the digital content landscape.
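To make the "ledger of actions" idea concrete, here is a purely conceptual Python sketch of a content credential. The real C2PA manifest is a binary (JUMBF/CBOR) structure with its own schema, so every field name below is illustrative only.

```python
# Purely conceptual sketch of the "ledger of actions" behind a content
# credential. The real C2PA manifest is a binary (JUMBF) structure, not
# JSON-like, and the field names here are illustrative placeholders.
manifest = {
    "claim_generator": "ExampleCamera/1.0",   # the conforming product
    "assertions": [
        # "Created assertions" are product-driven actions on the asset
        {"action": "created", "when": "2025-05-15T14:52:18Z"},
        {"action": "edited", "software_agent": "ExamplePhotoEditor"},
    ],
    # The manifest is cryptographically bound to the asset and signed
    # with an X.509 certificate from an approved CA; per the briefing,
    # the certificate carries an OID indicating the assurance level.
    "signature": {"alg": "ps256", "cert_chain": ["<leaf>", "<ca>"]},
}
```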
For more details, including the meeting transcript, please see our wiki: 2025-05-15 Scott Perry & The C2PA Conformance Program
https://www.linkedin.com/in/scott-perry-1b7a254/
https://digitalgovernanceinstitute.com/

The post EGWG 2025-05-15: The C2PA Conformance Program, Scott Perry appeared first on Trust Over IP.
NETOPIA Payments becomes the first online payment processor in the world to implement Click to Pay with Passkey FIDO (Fast Identity Online) – a modern online checkout solution built on EMV® global standards, designed to redefine the digital payment experience: faster, safer, and without manual card data entry.
Shane Weeden, IBM
An Ho, IBM
Abstract
Session hijacking is a growing initial attack vector for online fraud and account takeover. Because FIDO authentication reduces the effectiveness of other simpler forms of compromise, such as credential stuffing and phishing, cybercriminals turn to theft and re-use of bearer tokens. Bearer tokens are a form of credential which include session cookies used by browsers connecting to websites and OAuth access tokens used by other thick client application types such as native mobile applications. When these credentials are long-lived and can be “lifted and shifted” from the machine where they were created to be usable by a bad actor from another machine, their tradable value is significant. Emerging technologies such as Device Bound Session Credentials (DBSC) for browsers and Demonstrating Proof of Possession (DPoP) for OAuth applications seek to reduce the threat of session hijacking. This article describes how these technologies address the problem of session hijacking and how they complement strong phishing resistant authentication in online ecosystems.
Audience
This white paper is for chief information security officers (CISOs) and technical staff whose responsibility it is to protect the security and life cycle of online identity and access management from online fraud.
Download the White Paper

1. Introduction

Authentication and authorization are integral parts of an identity lifecycle, especially for online credential ecosystems. The growing threat of online identity fraud, with costly security incidents and breaches, has enterprises looking for ways to protect and secure their workforces from account takeover through different attack vectors such as phishing, credential stuffing, and session hijacking. For authentication, FIDO authentication with passkeys provides users with "safer, more secure, and faster online experiences," and the increase in the adoption of passkeys has reduced the success of credential phishing, credential stuffing, and session hijacking accomplished via man-in-the-middle (MITM) phishing attacks. However, what happens after the authentication ceremony?
After authentication, browsers and application clients are typically issued other credentials. Enterprise applications generally fall into two primary categories: those that are web browser based and use session cookies for state management, and thick client applications that use OAuth access tokens (this includes some browser-based single-page applications and most native mobile applications). Both types of credentials (session cookies and access tokens) are, in their basic use, "bearer" tokens: if you have the token, then you can continue to transact for the lifetime of that token as the user who authenticated and owns it.

This whitepaper explores adjacent technologies that address the "lift and shift" attack vector for bearer tokens and how these technologies complement FIDO-based authentication mechanisms. In particular, this paper focuses on the proposed web standard Device Bound Session Credentials (DBSC) for protecting browser session cookies and OAuth 2.0 Demonstrating Proof of Possession (DPoP) for protecting OAuth grants.
2. Terminology

session hijacking: An exploitation of the web session control mechanism that is normally managed for a session cookie.
credential stuffing: An automated injection of stolen username and password pairs (credentials) into website login forms to fraudulently gain access to user accounts.
access token: A credential used by a client-side application to invoke API calls on behalf of the user.

session cookie: A credential managed by browsers to maintain session state between a browser and a website.

bearer token: A token (in the context of this whitepaper, either an access token or a session cookie), so called because whoever holds the token can use it to access resources. A bearer token on its own can be "lifted and shifted" for use on another computing device.

sender-constrained token: A token protected by a mechanism designed to minimize the risk that anything other than the client which established the token during an authentication process could use that token in subsequent requests for server-side resources.

Device Bound Session Credential (DBSC): A proposal for a W3C web standard defining a protocol and browser behavior to establish and maintain sender-constrained cookies. The mechanism uses proof of possession of an asymmetric cryptographic key to help mitigate session cookie hijacking.

OAuth 2.0 Demonstrating Proof of Possession (DPoP): A mechanism for implementing sender-constrained access tokens that requires clients to demonstrate possession of an asymmetric cryptographic key when using the token.

1. Passkeys – https://fidoalliance.org/passkeys/
2. Device Bound Session Credentials – https://github.com/w3c/webappsec-dbsc
3. OAuth 2.0 Demonstrating Proof of Possession (DPoP) – RFC 9449: https://datatracker.ietf.org/doc/html/rfc9449
4. Session hijacking attack – https://owasp.org/www-community/attacks/Session_hijacking_attack
5. Credential stuffing – https://owasp.org/www-community/attacks/Credential_stuffing
3. Adjacent/complementary technologies for a secure ecosystem

While FIDO authentication technology can effectively eliminate phishing and credential stuffing attacks that occur during the login process, the addition of solutions to mitigate threats associated with bearer token theft is equally important. Bad actors whose attacks are thwarted during the login process will go after the next weakest link in the chain and try to steal post-authentication bearer tokens. This section explores two of these technologies for protecting bearer tokens: Device Bound Session Credentials (DBSC) protect browser-based session cookies and Demonstrating Proof of Possession (DPoP) protects OAuth grants. Alternative approaches to protect bearer tokens are also discussed.
Because no single piece of technology can protect against all threats, a combination of multiple techniques is required for adequate protection.
Table 1: Combination of technologies for increased security
Technologies | Remote Phishing (authentication threat) | Credential Stuffing (authentication threat) | Token Theft (post-authentication threat)
Passkeys | mitigated | mitigated | not addressed
DBSC/DPoP | not addressed | not addressed | mitigated
Passkeys + DBSC/DPoP | mitigated | mitigated | mitigated

3.1 Browser session cookie security
Before discussing Device Bound Session Credentials (DBSC), you will need to understand the problem being addressed regarding browser session cookies. Session hijacking via cookie theft allows an attacker, who possesses stolen cookies, to bypass end-user authentication, including any strong or multi-factor authentication (MFA). This is particularly problematic when browsers create long-lived session cookies (which are a type of bearer token), since these cookies can be traded as alternatives to a user’s primary authentication credentials and then used from the attacker’s machine. This can lead to unauthorized access to sensitive data, financial loss, and damage to an organization’s reputation.
Attackers perform cookie theft through various methods such as man-in-the-middle phishing of a user’s existing MFA login process (when phishing-resistant authentication such as FIDO is not used), client-side malware, and occasionally through vulnerabilities in server-side infrastructure or software. Regardless of how cookie theft is perpetrated, when successful, these attacks are not only dangerous, but also hard to isolate and detect. Complementary technologies, such as Device Bound Session Credentials (DBSC), minimize the risks associated with browser cookie theft by making stolen cookies impractical to use from any machine other than the machine to which they were issued during authentication.
3.2 Device Bound Session Credentials – DBSC
DBSC refers to a proposed web standard currently in development within the Web Application Security working group of the W3C[2]. The goal of DBSC is to combat and disrupt the market for stolen web session cookies. This is achieved by defining an HTTP messaging protocol and required browser and server behaviors that bind the use of application session cookies to the user's computing device. DBSC uses an asymmetric key pair, and in browser implementations the private key should not be extractable by an attacker; for example, it should be stored within a Trusted Platform Module (TPM), secure element, or similar hardware-based cryptographic module.
At a high level, the API, in conjunction with the user's browser and secure key storage capabilities, allows for the following:
1. The server communicates to the browser a request to establish a new DBSC session. This includes a server-provided challenge.
2. The browser generates an asymmetric key pair, then sends the public key along with the signed challenge to the server. This process is referred to as DBSC registration. Browser implementations of DBSC should use operating system APIs that facilitate secure, hardware-bound storage and use of the private key.
3. The server binds the public key to the browser session by issuing a short-lived, refreshable auth_cookie, which is then required to be transmitted in subsequent browser requests to the web server.

As the auth_cookie regularly expires, a mechanism is required for the browser to refresh the auth_cookie asynchronously to primary application web traffic. The refresh process requires signing a new server-issued challenge with the same private key created during DBSC registration, thereby re-proving (regularly) that the client browser is still in possession of the same private key.
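The following is a minimal sketch of that key ceremony, assuming the Python `cryptography` package. A real browser would hold the private key in a TPM or secure element; a software key stands in here purely to show the data flow.

```python
# Minimal sketch of the DBSC key ceremony (illustrative, not the actual
# browser API). Real implementations keep the private key in a TPM or
# secure element; a software key stands in to show the data flow.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes, serialization

# 1. Registration: the browser generates a key pair for this session...
private_key = ec.generate_private_key(ec.SECP256R1())
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# ...and signs the server-provided challenge.
challenge = b"server-issued-nonce"
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
# The browser sends public_pem + signature; the server stores the public
# key against the session and issues a short-lived auth_cookie.

# 2. Refresh: whenever the auth_cookie expires, the server verifies a
# fresh challenge signed with the *same* key, re-proving the session is
# still on the original device.
server_side_key = serialization.load_pem_public_key(public_pem)
server_side_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises on failure
```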
Limiting the lifetime of the auth_cookie to short periods of time (for example, a few minutes) disrupts the market for trading long-lived session cookies. An attacker can only use stolen session cookies (including the auth_cookie) for a brief period, and cannot perform a refresh operation, since the private key required to perform a refresh operation is not extractable from the client machine.
DBSC may be introduced into existing deployments with minimal changes to the application. This is important as DBSC could easily be incorporated as a web plugin module in existing server-side technology (for example, Apache module, Servlet Filter, or reverse proxy functionality). This permits enterprises to roll out deployment of DBSC in phases without a complete overhaul of all current infrastructure and companies can prioritize certain critical endpoints or resources first.
DBSC server-side implementations can also be written in a manner that permits semantics, for example: “If the browser supports DBSC, use it, otherwise fallback to regular session characteristics.” This allows users to gain the security advantages of DBSC when they use a browser that supports it without having to require all users to upgrade their browsers first.
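A hypothetical server-side policy check might look like the sketch below; the session fields and return values are placeholders, since the concrete DBSC headers and cookie handling come from the evolving draft standard.

```python
# Hypothetical fallback policy for mixed browser support. Field and
# return-value names are placeholders, not part of the DBSC draft.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    user: str
    dbsc_bound: bool          # did this browser complete DBSC registration?
    auth_cookie_valid: bool   # state of the short-lived proof cookie

def authorize(session: Optional[Session], require_dbsc: bool) -> str:
    if session is None:
        return "redirect-to-login"
    if session.dbsc_bound:
        # Bound session: a valid, unexpired auth_cookie must accompany it.
        if not session.auth_cookie_valid:
            return "trigger-dbsc-refresh"   # re-prove key possession
    elif require_dbsc:
        # High-assurance applications may refuse unbound sessions outright.
        return "forbidden"
    return "serve-application"

# A DBSC-capable browser with an expired auth_cookie is asked to refresh,
# while a legacy browser is still served where policy allows it.
print(authorize(Session("alice", True, False), require_dbsc=False))   # trigger-dbsc-refresh
print(authorize(Session("bob", False, False), require_dbsc=False))    # serve-application
```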
Refer to the Device Bound Session Credentials explainer for more details on the DBSC protocol and standard, including a proposal for enterprise-specific extensions that adds attestation to DBSC keypairs.
3.2.1 What makes DBSC a technology complementary to FIDO?
The DBSC draft standard permits the login process to be closely integrated with the DBSC API. While FIDO is a mechanism that makes authentication safer and phishing resistant, DBSC is a mechanism that makes the bearer credential (session cookie) safer post-authentication. They complement each other by reducing the risk of account takeover and abuse, making the entire lifecycle of application sessions safer.
3.2.2 Alternative solutions
DBSC is not the first standard to propose binding session cookies to a client device. Token Binding, an alternative defined by IETF RFCs 8471, 8472, and 8473, is implemented via a Transport Layer Security (TLS) extension and uses cryptographic certificates to bind tokens to a TLS session. It is complex to implement, as it requires changes at the application layer and in TLS security stacks, and it has seen little uptake: the Token Binding over HTTP standard has not been widely adopted, and only one major browser currently offers support.
3.2.3 Advice
The DBSC standard relies on local device security and operating system APIs for storage and use of the private key that is bound to the browser’s session. While these private keys cannot be exported to another device, the key is available on the local system and may be exercisable by malware residing on the user’s device. Similarly, in-browser malware still has complete visibility into both regular session cookies and short-lived auth_cookies. DBSC is not a replacement for client-side malware protection, and the threat model for DBSC does not provide protections from persistent client-side malware. Ultimately, the user must trust the browser.
As browsers start to support DBSC over time, it will be important for servers to be able to work with a mix of browsers that do and do not include support for this technology. Some enterprises may dictate that corporate issued machines include browsers known to support DBSC, but many will not. It will be necessary for server-side implementations to take this into consideration, using DBSC when the browser responds to registration requests, and tolerating unbound session cookies when the browser does not. When building or choosing a commercial solution, ensure you consider this scenario, and include the ability to implement access control policies that strictly require DBSC in highly controlled or regulated environments or for specific applications.
At the time of writing, DBSC is in early evolution. It remains to be seen whether or not it will be widely adopted by browser vendors. The hope is that incubating and developing this standard via the W3C will result in wider adoption than previous proposals, similar to the way that the WebAuthn API has been adopted to bring passkey authentication to all major browser implementations.
4. OAuth grants

The previous section introduced DBSC as a means to protect against session cookie theft in web browsers. Thick application clients, including mobile applications and single-page web applications, typically use stateless API calls leveraging OAuth grants instead of session cookies. An OAuth grant may be established in several ways, with the recommended pattern for thick clients being to initially use the system browser to authenticate a user, and grant access for an application to act on their behalf. Conceptually this is remarkably similar to browser-based sessions, including the ability, and recommendation, to use FIDO authentication for end-user authentication when possible. At the conclusion of the browser-based authentication portion of this flow, control is returned to the thick client application or single-page web application where tokens are established for use in programmatic API calls.
The challenge that occurs from this point forward is almost identical to that described for browsers – the OAuth tokens are bearer tokens that if exposed to a bad actor can be used to call application APIs from a remote machine instead of from the legitimate application.
This section describes the use of DPoP, a technology for protecting against the "lift and shift" of credentials used in OAuth-protected API calls, which, just like DBSC, makes use of an asymmetric key pair and ongoing proof of possession of the private key.
4.1 Demonstrating Proof of Possession (DPoP)
OAuth 2.0 Demonstrating Proof of Possession (DPoP) is an extension of the existing OAuth 2.0 standard for implementing device-bound (or sender-constrained) OAuth access and refresh tokens. It is an application-level mechanism that allows the tokens associated with an OAuth grant (that is, refresh tokens and access tokens) to be bound to the requesting client using a public and private key pair. This requires the client to prove ownership of its private key to the authorization server when performing access token refresh operations, and to resource servers when using access tokens to call APIs.
6. OAuth 2.0 for Native Apps https://datatracker.ietf.org/doc/html/rfc8252
High assurance OpenID specifications, such as Financial-grade API (FAPI 2.0), mandate the use of sender-constrained tokens and DPoP is the recommended method for implementing this requirement when Mutual TLS (mTLS) is not available.
At a high level, DPoP requires that:
- The client generates a per-grant public/private key pair to be used for constructing DPoP proofs. Best-practice implementations should use operating system APIs to ensure the private key is non-extractable.
- On initial grant establishment (for example, exchanging an OAuth authorization code for the grant's first access token and refresh token), a DPoP proof (a JWT signed by the client's private key that contains, among other things, a copy of the public key) is used to bind a public key to the grant.
- Requests to a resource server using an access token obtained in this manner must also include a DPoP proof header, continuously proving possession of the private key used during grant establishment. This is done for every API request.
- Resource servers are required to check if an access token is sender-constrained, confirm the public key, and validate the DPoP proof header on each API call.
- For public clients, subsequent refresh_token flows to the authorization server's token endpoint must also contain a DPoP proof signed with the same key used during initial grant establishment. This is particularly important as refresh tokens are often long-lived and are also a type of bearer token (that is, if you have it you can use it). The authorization server must enforce the use of a DPoP proof for these refresh token flows and ensure signature validation occurs via the same public key registered during initial grant establishment.

Unlike a plain bearer access token, which can be used by any holder, DPoP-based access tokens are bound to the client that initially established the OAuth grant, since only that client can sign DPoP proofs with the private key. This approach minimizes the risks associated with malicious actors trading leaked access tokens.
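As a rough illustration of proof construction, the sketch below builds a DPoP proof JWT along the lines of RFC 9449, assuming the PyJWT and `cryptography` packages; the URL and token values are placeholders.

```python
# Sketch of constructing a DPoP proof JWT (RFC 9449), assuming PyJWT and
# cryptography are installed. Endpoint URL and token are placeholders.
import base64, hashlib, json, time, uuid
from typing import Optional

import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # per-grant key
public_jwk = json.loads(jwt.algorithms.ECAlgorithm.to_jwk(private_key.public_key()))

def dpop_proof(method: str, url: str, access_token: Optional[str] = None) -> str:
    claims = {
        "jti": str(uuid.uuid4()),   # unique proof id, for replay protection
        "htm": method,              # HTTP method of the request
        "htu": url,                 # target URI (no query/fragment)
        "iat": int(time.time()),
    }
    if access_token:
        # 'ath' binds the proof to the access token being presented
        claims["ath"] = base64.urlsafe_b64encode(
            hashlib.sha256(access_token.encode()).digest()
        ).rstrip(b"=").decode()
    return jwt.encode(
        claims,
        private_key,
        algorithm="ES256",
        headers={"typ": "dpop+jwt", "jwk": public_jwk},
    )

# Each API call then carries both headers:
#   Authorization: DPoP <access_token>
#   DPoP: <result of dpop_proof("GET", "https://api.example.com/resource", token)>
```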
Refer to DPoP RFC 9449 – OAuth 2.0 Demonstrating Proof of Possession (DPoP) for more information.
4.2 What makes DPoP a complementary technology to FIDO?
FIDO can be leveraged for phishing resistant end-user authentication during establishment of an OAuth grant. Refresh and access tokens obtained by a client following this authentication should be safeguarded against “lift and shift” attacks just like session cookies in browser-based apps. DPoP is a recommended solution for protecting these OAuth tokens from unauthorized post-authentication use. Together, FIDO for end user authentication and DPoP for binding OAuth tokens to a client device complement each other to improve the overall security posture for identities used in thick client applications.
4.2.1 DPoP alternative solutions
RFC8705 – OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens describes a mechanism that offers a transport-layer solution to bind access tokens to a client certificate. While it has been approved for use in FAPI 2.0 for open banking solutions, it is not particularly suitable for public clients such as native mobile applications.
RFC9421 – HTTP Message Signatures defines an application-level mechanism for signing portions of an HTTP message. Key establishment and sharing between the client and verifier are not defined by this specification, although this could be performed in a trust-on-first-use manner during initial grant establishment, similar to DPoP. There is no known public specification that maps the use of HTTP message signatures to the use case of sender-constrained bearer tokens in an OAuth client application. In the absence of such a public specification, widespread adoption for this use case is unlikely.
4.2.2 Advice
Sender-constrained tokens are a good idea and, in some deployments, a regulatory requirement. For example, use of the FAPI profiles of OAuth is now mandated by many sovereign open banking initiatives. DPoP is a relatively simple way to achieve this requirement and is flexible enough to cover a wide range of application client types. That said, care must still be taken to adhere to the security considerations of DPoP. Pay close attention to section 11 of RFC 9449, and apply other application security strategies for native or browser-based single-page applications as your scenario dictates. Remember that DPoP is focused solely on addressing the threats associated with token exfiltration, which include trading and use by malicious actors. It should be considered part of a defense-in-depth strategy for OAuth applications.
5. Conclusion

The intent of this paper is to inspire thinking around how different web security standards fit together and how those standards relate to the use of FIDO authentication for users. There are so many standards and standards bodies that it is often hard to understand which compete in the same space and which augment one another to form part of a comprehensive defense-in-depth strategy for identity fraud protection in online applications.
This paper tackled a specific, prevalent application security problem – the malicious trading and use of stolen cookies and access tokens. This paper also showed how technologies such as DBSC and DPoP mitigate the threats associated with token theft and how these technologies are complementary to FIDO authentication. Paired with FIDO, DBSC and DPoP provide greater overall identity fraud protection for your applications.
cross-posted on the Amnesty UK blog
Community-driven change is more important than ever. Whether we are advocating for social justice, environmental sustainability, or political reform, collective action is how we create lasting impact. But how do we build a movement that starts with individual curiosity and grows into sustained activism? That’s where the Amnesty International UK (AIUK) community platform project comes in — a digital hub designed to empower individuals, support collaboration, and drive meaningful change.
This blog post outlines how the platform and community strategy work together to guide people from discovery of the AIUK community to becoming activists within it.
Image remixed from an original by Visual Thinkery for WAO

1. Discovery

The journey of community-driven change starts with discovery. This is the stage where individuals first come into contact with AIUK. Maybe they learn about an issue, identify it as important, and begin to consider how they might want to get involved. Or maybe they meet someone at a demonstration, and discover the community first-hand.
AIUK's social media broadcasting is just one tool that helps people discover Amnesty International UK. AIUK makes complex issues accessible and relatable; we want to do the same as we highlight grassroots efforts and community initiatives.
We want to encourage posts that show:
- Our dedicated community, highlighting key grassroots initiatives and campaigns.
- Signposts to find local groups or events based on interests.
- Digital actions, such as petitions or downloading campaign guides, to help users take their first steps.

Such content ensures that even people who are new can find relevant AIUK communities and take the first steps toward engagement.
2. Intention to Engage

Once someone discovers a cause they care about, the next step is forming an intention to engage. This stage is all about commitment — moving from passive interest to active participation.
By showcasing community on the AIUK website, we both invite people in and celebrate what the community is achieving. We want to present clear pathways for involvement and help community members inspire others to take steps towards action.
We need to figure out processes that help:
- Goal-setting: Encouraging community members to set personal milestones, like committing to attend 100 meetings.
- Sharing success: Telling success stories and finding testimonials that effectively attract new people while celebrating community achievements.
- Balancing information: Showcasing static information about past successes with dynamic, real-time updates on current campaigns from the community.

By making it easy for people to express their intent and take small but meaningful steps, we build confidence and lay the groundwork for deeper engagement.
3. Taking Action: Turning Intent into Impact
With intention comes action, and this is where real change begins. At this stage, people start to feel a sense of belonging and are ready to contribute to a cause they care about.
A knowledge base can help equip users with actionable tools. We’ll need clear resources and learning pathways that:
- Guide people to the right information: Whether it's organising a protest, writing letters to policymakers, or starting a local campaign, the knowledge hub can provide step-by-step guidance tailored to issues we work on.
- Help people collaborate: People should be able to connect with others who share their interests and work together on projects — whether virtually or in person. Best practices and community policies may also be at home in the knowledge hub.
- Show them into the community: Make sure that people feel supported and seen as they take action. Create an architecture of participation that brings them into the community platform.

This stage is about turning isolated actions into collective power, with the support of the community ensuring that every contribution counts.
4. Sustaining Action: Building Lasting Commitment
Sustained action is the key to creating lasting change. Too often, movements fizzle out after an initial burst of energy, but with a strong community strategy and integrated platform, we can keep momentum going.
To sustain engagement, the community platform needs to help people align with others in the AIUK movement. We need to think about:
- Feedback loops: Regular check-ins with the community to understand their needs and ensure that we are adapting the community strategy and platform accordingly.
- A recognition ecosystem: Using digital badges and shoutouts for individuals or groups who demonstrate consistent commitment to help us make activism more visible.
- Storytelling opportunities: Sharing success stories and lessons learned will inspire others and keep motivation high.

By encouraging a sense of belonging and purpose, we ensure that members find reasons to continue building collective power for human rights.
5. Becoming an Activist: Empowering Future Leaders
The final stage is becoming an activist. At this point, individuals understand that community isn't one person, but rather all of us. They begin to work on behalf of others, coordinate together and lift people up with their leadership.
These leaders will use other coordination tools and processes and that’s great! We want to empower the development of activist and leadership skills. We’ll need:
- Decentralised coordination best practices: For members who are ready to take on larger roles, such as leading groups or campaigns.
- Mentorship programs: Connecting experienced activists with newcomers to share knowledge and build networks.
- Advocacy training: Workshops, webinars, and resources focused on effective communication, policy advocacy, and community organising.

Through these efforts, we can go beyond nurturing individual leaders to continue building a movement.
The Power of Community Work in Driving Change
The journey from discovery to becoming an activist is a process of gradual engagement and empowerment. A system of platforms, processes, and content helps AIUK move people towards becoming activists. Although we use various digital tools, the journey is an emotional and social one.
We are working hard to make sure the community platform project harnesses the collective strength of our community and makes a difference that lasts.
Building Power and Making Change was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
First is Visa Intelligent Commerce, which will make intentcasting happen in a big way. It will also elevate the roles of Inrupt and the open-source Solid Project.
Second is making individuals first parties in their agreements with companies. It’s been the other way around ever since industry won the industrial revolution. Still, in the digital world, which runs on the Internet’s peer-to-peer protocols, we can fix that with an upcoming IEEE standard called IEEE P7012, aka MyTerms.
Third is the First Person Project, or FPP (website pending). With help on the buy side from Customer Commons and on the sell side by Ayra, we can finally replace “show your ID” with verifiable credentials presented on an as-needed basis by independent and self-sovereign individuals operating inside their own webs of trust.
Fourth is personal AI. This is AI that is as much yours as your shoes, your bike, and your PC. Personal, not personalized.
To explain how these will work together, start here:
Not long after The Intention Economy came out in May 2012, Robert Thomson, Managing Editor of The Wall Street Journal, wanted the book's opening chapter to serve as the cover essay for the Marketplace section of an upcoming issue. Harvard Business Review Press didn't like that idea, so I wrote an original piece based on one idea in the book: that shoppers will soon be able to tell the market what they're looking for, in safe, secure and anonymous ways—a kind of advertising in reverse that the book called "personal RFPs" and has since come to be called "intentcasting." This became The Customer as a God. The image above was the whole cover of the Marketplace section on Monday, July 23, 2012. The essay opened with these prophetic words: "It's a Saturday morning in 2022…"
It is now a Friday morning in 2025, and that godly future for customers is still not here. Yes, we have more market power than in 2012, but we are digital serfs whose powers are limited to those granted by Amazon, Apple, Facebook, Google, Microsoft, and other feudal overlords. This system is a free market only to the degree that you can choose your captor. This has led to—
The IONBA (Internet Of Nothing But Accounts) is based on a premise: that the best customers are captive ones. In this relic of the industrial age, customers are captive to every entity that requires logins and passwords. Customers also have no ways of their own to globally control what data is collected about them, or how. Or to limit how that data is used. This is why our digital lives are infected by privacy-killing data-collection viruses living inside our computers, phones, TVs, and cars.
If you didn’t know about those last two, dig:
- Consumer Reports says "All smart TVs—from Samsung, LG, you name it—collect personal data." They also come with lame "privacy" controls, typically buried deep in a settings menu. (Good luck exhuming them. The ones in our TCL and Samsung TVs have all but disappeared.)
- Mozilla calls new cars "the Worst Product Category We Have Ever Reviewed for Privacy." There is also nothing you can do to stop your car from reporting on everything your car does—and everything you do, including sexual activity—to the carmaker, insurance companies, law enforcement, and who knows who else. This data goes out through your car's cell phone, misleadingly called a telematics control unit. The antenna is hidden in the shark fin on your car's roof or in an outside mirror.

Businesses are also starting to lose faith in surveillance, for at least eight reasons:
- People hate it.
- They also fight it. By 2015 ad blocking and tracking protection were the biggest boycott in world history.
- It tarnishes brands.
- Ad fraud is a gigantic problem, and built into the system.
- It commits Chrysoogocide (killing golden geese, most notably publishers). Bonus link.
- Regulatory pressure against it is getting bigger all the time.
- Advertisers are finally remembering that brands are made by ads aimed at populations, while personalized ads are just digital junk mail.
- Customers are using AI tools for guidance toward a final purchase, bypassing marketing schemes to bias purchasing decisions along the way. For more on that, see Tom Fishburne's cartoon, and Bain's report about it.

So our four roads to The Intention Economy start with the final failings of the systems built to prevent it. Now let's look at those roads.
Visa Intelligent Commerce
The press release is Find and Buy with AI: Visa Unveils New Era of Commerce. Less blah is Enabling AI agents to buy securely and seamlessly. Here’s the opening copy.
Imagine a future where an AI agent can shop and buy for you. AI commerce — commerce powered by an AI agent — is going to transform the way consumers around the world shop.
Introducing Visa Intelligent Commerce, an initiative that will empower AI agents to deliver personalized and secure shopping experiences for consumers – at scale.
From browsing and selection to purchase and post-purchase management, this program will equip AI agents to seamlessly manage key phases of the shopping process.
Visa CEO Ryan McInerney says a lot more in a 1:22 talk at Visa Product Drop 2025. The most relevant part starts about 26 minutes in, with a demo starting at about 31:30. Please watch it. Much of what you see there owes to Inrupt and Solid, which Sir Tim Berners-Lee says were inspired by The Intention Economy. For more about where Inrupt and Solid fit in Visa Intelligent Commerce, see Standards for Agentic Commerce: Visa’s Bold Move and What It Means: Visa’s investment in safe Intelligent Commerce points to a future of standards-forward personal AI, by John Bruce, Inrupt’s CEO. John briefed Joyce and me over Zoom the other day. Very encouraging, with lots to develop on and talk about.
More links:
- A tweet appreciative of Inrupt by Visa's @JackForestell
- Privacy for Agentic AI, by Bruce Schneier, Inrupt's CISO (as well as the world's leading security expert, and an old pal through Harvard's Berkman Klein Center)
- Also from Bruce: What Magic Johnson and Bruce Schneier taught us at RSAC 2025 and RSAC 2025: The Pioneers of the Web Want to Give You Back Control of Your Data
- Visa announces AI Agent Payment APIs – and a pathway to Empowerment Tech, by Jamie Smith, who writes Customer Futures, the most VRooMy newsletter out there.

Some news being made about Visa Intelligent Commerce:
- Visa partners with AI giants to streamline online shopping
- Visa Gives AI Shopping Agents 'Intelligent Commerce' Superpowers
- Visa launches 'Intelligent Commerce' platform, letting AI agents swipe your card—safely, it says
- How major payment companies could soon let AI spend your money for you
- Visa, Mastercard offer support for AI agents
- Visa wants to give artificial intelligence 'agents' your credit card
- Visa adds 'unknown concept' where AI makes purchases for you – but shoppers suspect more 'sinister purpose'
- Visa Unveils Intelligent Commerce to Power AI-Driven Payments

IEEE P7012 "MyTerms"
MyTerms, the most important standard in development today, will be a keystone service of Customer Commons, the nonprofit spinoff of ProjectVRM. It will do for contract what Creative Commons did for copyright: give individuals a new form of control. With MyTerms, agreements between customers and companies will be far more genuinely mutual, and open to new forms of innovation not based on the kind of corporate control that typifies the IONBA. For example, it can open Visa Intelligent Commerce to conversations and relationships that go far past transaction. Take, for example, market intelligence that flows both ways. While this has been thinkable for a decade or more (that last link is from 2016), it's far more do-able when customers and companies have real relationships based on equal power and mutual interests. These are best framed by agreements that start on the customer's side and give customers scale across all the companies with which they have genuine relationships.
First Person Project (FPP)
To me, FPP begins with the vision "Big Davy" Sallis came up with while he was working for VISA Europe in 2012 and read The Intention Economy. At the time, he wanted Visa to make VRM a real category, but assumed that would take too long. So he decided to create a VRM startup called Qredo. Joyce and I consulted Qredo until Davy died (far too young) in 2015. Qredo went into a different business, but a draft I created for Qredo's original website survives, and it outlines much of what the FPP will make possible. That effort is led by Drummond Reed, another friend and collaborator of Davy's and a participant in ProjectVRM from the start. Drummond says the FPP is inspired by Why We Need First Person Technologies on the Net, a post published here in 2014. That post begins,
We need first person technologies for the same reason we need first person voices: because there are some things only a person can say and do.
Only a person can use the pronouns “I,” “me,” “my” and “mine.” Likewise, only a person can use tools such as screwdrivers, eyeglasses and pencils. Those things are all first person technologies. They were invented for individual persons to use.
We use first person technologies the same unique ways we use our voices.
Among other things, the First Person Project will fix how identity works on the Internet. With FPI—First Person Identity—interactions with relying parties (the ones wanting "your ID") don't need your driver's license, passport, birth certificate, credit card, or account information. You just give them what's required, on an as-needed basis, in the form of verifiable credentials. The credentials you provide can verify that you are a citizen of a country, licensed to drive, have a ticket to a game, or whatever. In other words, they do what Kim Cameron outlined in his Laws of Identity: disclose minimum information for constrained uses (Law 2) to justifiable parties (Law 3) under your control and consent (Law 1). The identifier you present with them is a DID: a Decentralized Identifier. No account is required.
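To ground this, here is a hedged sketch of what such a minimum-disclosure credential can look like in the W3C Verifiable Credentials data model; all DIDs, dates, and the proof value below are illustrative placeholders, and real deployments would use a concrete proof suite, often with selective-disclosure or zero-knowledge mechanisms.

```ts
// A simplified W3C Verifiable Credential asserting a single predicate
// ("over 18") rather than exposing a full identity document.
// All identifiers and the proof value are illustrative placeholders.
const ageCredential = {
  '@context': ['https://www.w3.org/2018/credentials/v1'],
  type: ['VerifiableCredential'],
  issuer: 'did:example:government-agency',      // placeholder issuer DID
  issuanceDate: '2025-05-01T00:00:00Z',
  credentialSubject: {
    id: 'did:example:alice',                    // placeholder holder DID
    ageOver: 18,                                // the only fact disclosed
  },
  proof: {
    type: 'DataIntegrityProof',
    verificationMethod: 'did:example:government-agency#key-1',
    proofValue: 'z58...',                       // placeholder signature
  },
};
// A relying party can verify the issuer's signature without learning the
// holder's name, birthdate, or document numbers.
```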
Trust in FPI also expands from individual to community. Here is how Phil Windley explains it in Establishing First Person Digital Trust:
When Alice and Bob met at IIW, they didn’t rely on a platform to create their connection. They didn’t upload keys to a server or wait for some central authority to vouch for them. They exchanged DIDs, authenticated each other directly, and established a secure, private communication channel.
That moment wasn’t just a technical handshake—it was a statement of first-person identity. Alice told Bob, “This is who I am, on my terms.” Bob responded in kind. And when they each issued a verifiable relationship credential, they gave that relationship form: a mutual, portable, cryptographically signed artifact of trust. This is the essence of first-person identity—not something granted by an institution, but something expressed and constructed in the context of relationships. It’s identity as narrative, not authority; as connection, not classification.
And because these credentials are issued peer-to-peer, scoped to real interactions, and managed by personal agents, they resist commodification and exploitation. They are not profile pages or social graphs owned by a company to be monetized. They are artifacts of human connection, held and controlled by the people who made them. In this world, Alice and Bob aren’t just users—they’re participants.
This also expands outward into community, and webs of trust. You get personal agency plus community agency.
The FPP covers a lot more ground than identity alone, but that’s where it starts. Also, Customer Commons is a funding source for the FPP, and I’m involved there as well.
Personal AI
Reza Rassool was also inspired by The Intention Economy when he started Kwaai.ai, a nonprofit community developing open-source personal AI. I now serve Kwaai as its volunteer Chief Intention Officer.
Let’s look at what personal AI will do for this woman:
Looks great, but we're stuck in the IONBA, where she has little control over her personal data in all those spaces. For example,
- She doesn't have the digital version of what George Carlin called "a place for my stuff." (Watch that video. It's brilliant—and correct.)
- She has few records of where she's been, who she's been with and when—even though apps on her phone know that stuff and are keeping it inside the records of her giant overlords and/or selling it to parties unknown, with no way yet for getting it back for her own use.
- Her finances are possibly organized, but scattered between the folders she keeps for taxes, plus the ones that live with banks, brokers, and other entities she hardly thinks about. It would be mighty handy to have a place of her own where she could easily see all her obligations, recurring payments, subscriptions, and other stuff her counterparties would rather she not know completely.
- Her schedules are in Apple, Google, and/or Microsoft calendars, which are well app'd and searchable, but not integrated. She has no digital calendar that is independent and truly her own.
- Her business and personal relationship records are scattered across her contact apps, her LinkedIn page, and piles of notes and business cards. She has no place or way of her own to manage all of them.
- Her health care records (at least here in the U.S.) are a total mess. Some of them are inside the MyCharts and patient portals provided by separate (and mostly unconnected) health care specialists and medical systems. Some are in piles of printouts she has accumulated (if she's kept them) from all the different providers she has seen. Some are in fitness and wellness apps, all with exclusive ways of dealing with users. None of it is in a unified and coherent form.

So the challenge for personal AI is pulling all that data out of all her accounts and putting it into forms that give her full agency, with the help of her personal AIs. Personalized AIs from giants can't do that. We need our own personal AIs.
Four roads, one destination: a world where free customers prove more valuable than captive ones. Let’s make it happen.
The UK government has said it will roll out passkey technology across its digital services later in 2025, aiming to phase out SMS-based verification in favour of a more secure, user-friendly alternative.
Passkeys are unique digital credentials tied to a user’s personal device and offer a way to authenticate identity without the need for traditional passwords or one-time text codes.
Passkeys never leave the device and so cannot be reused across websites, which makes them resistant to phishing and other common attacks.
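As a concrete illustration, here is a minimal sketch of creating a passkey in the browser with the standard WebAuthn API; the relying-party ID, user details, and challenge are placeholders (in practice the challenge and options come from the server).

```ts
// A minimal sketch of registering a passkey via WebAuthn in the browser.
// The RP ID, user info, and challenge are illustrative placeholders; a real
// challenge must be generated server-side and verified on the response.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { id: 'example.com', name: 'Example Service' },
    user: {
      id: new TextEncoder().encode('user-123'),
      name: 'alice@example.com',
      displayName: 'Alice',
    },
    pubKeyCredParams: [{ alg: -7, type: 'public-key' }],   // -7 = ES256
    authenticatorSelection: { residentKey: 'required', userVerification: 'preferred' },
  },
});
// The private key never leaves the authenticator; the server stores only the
// public key, and every sign-in assertion is scoped to this origin, which is
// what defeats phishing and credential reuse.
```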
DIF members showcased their vision at this year's European Identity and Cloud Conference (EIC 2025), bringing together experts who are defining the future of human-centric digital identity. As AI capabilities accelerate, DIF members are tackling both the architectural foundations and philosophical implications of self-sovereign identity, and EIC provided an excellent forum to share how they are solving digital identity's most complex challenges.
The Philosophy Behind Standards: Values in Digital IdentityMarkus Sabadello, CEO of Danube Tech and DIF Steering Committee member, delivered a compelling talk examining the philosophical underpinnings of digital identity standards. His presentation, "The Worldviews behind Digital Identity Standards," argued that technical choices in standards like OID4VC, DIDComm, SD-JWT-VC, and the W3C verifiable credential data model reflect deeper philosophical trajectories variously aligned with European values like Liberty, Equality, and Fraternity.
Markus Sabadello presents "The Worldviews behind Digital Identity Standards"
Sabadello illustrated how technologies like DIDComm prioritize fraternity through peer-to-peer connections, while JSON-LD enables innovation and liberty through permissionless semantic flexibility and self-publishing. As the industry standardizes wallets and verifiable credentials, he emphasized that these standards should be evaluated not only on technical merits but also on how they impact human values like sovereignty and equitable participation.
The talk concluded with an important reminder that technology is never value-neutral, highlighting the need to align digital identity standards with humanistic values while avoiding the pitfalls of fragmentation from competing, politically and commercially driven standards.
AI and Identity: A New Frontier
Another highlight of the conference was the Verifiable AI talk and panel series. In his talk "Private Personal AI and Verified Identity for AI Agents", Alastair Johnson (CEO of Nuggets) explored the challenges of implementing truly private personal AI that protects user sovereignty while creating verifiable identities for AI agents. Johnson examined how privacy-preserving technologies and self-sovereign identity frameworks can enable secure AI agent operations while maintaining individual control over personal data.
Alastair Johnson presents "Private Personal AI and Verified Identity for AI Agents"
The subsequent panel, "Verifiable AI: The New Frontier," was moderated by Ankur Banerjee, CTO of Cheqd and DIF TSC Co-chair. The panel brought together Matthew Berzinski (Ping Identity), Sebastian Rodriguez (Advisor to Privado.ID), and Alastair Johnson to explore the intersection of AI and digital identity.
The panel addressed critical questions about how private personal AI agents can securely interact with identity systems, approaches to verifying AI agent identities, and frameworks for establishing trust in AI-human interactions.
As Ankur described in a follow-up LinkedIn post, key takeaways included:
- The need for both decentralized and centralized/hybrid approaches for different scenarios, including "AI employees" like the Devin software engineering assistant
- The challenge of allowing "good bots" into systems designed to keep malicious automation out
- The emerging consensus that AI agents will need their own wallets (or at least high-stakes delegation capabilities to and from wallets, or to operate inside of wallets), and what kind of unique identifiers can power these interfaces
- The vulnerability of AI agents to bribing, threats, and "social" engineering attacks despite (or because of) their primarily rule-based constraints
- The agentic "Ship of Theseus" problem: at what point is an AI agent sufficiently changed that it invalidates prior attestations?

The Personhood Challenge: Humans in a World of AI
Another significant focus at the conference was the development of personhood credentials as a defense against AI-generated deepfakes. Drummond Reed, Director of Trust Services at Gen Digital, presented "First-Person Credentials: A Case Study," discussing a collaborative effort between the Ayra Association, Customer Commons, Trust Over IP, and DIF to create a people-powered, privacy-preserving proof of personhood.
Personhood Credentials: Why is proof of personhood so hot? Because it sits at the intersection of AI and decentralized identity. The threat of generative AI deep fakes has accelerated the search for a sustainable… (KuppingerCole)
This work built on a 2024 paper titled "Personhood Credentials," which proposed using a decentralized architecture based on verifiable credentials and zero-knowledge proofs. Reed's presentation covered design goals, trust models, user experience considerations, and go-to-market strategies for this emerging approach.
Personhood Credentials: From Theory to Practice
The subsequent panel, "Personhood Credentials: From Theory to Practice," brought together Ankur Banerjee, Drummond Reed, Steven McCown (Chief Architect of Anonyome Labs), and Sebastian Rodriguez to examine real-world implementations and practical challenges in creating personhood credentials. The panel explored how technologies like zero-knowledge proofs and selective disclosure can preserve individual privacy while meeting legitimate verification requirements.
PANEL: Personhood Credentials: From Theory to Practice. This panel features experts examining real-world implementations, emerging standards, and practical challenges in digital identity. They will explore how technologies such as zero-knowledge proofs… (KuppingerCole)
Technical Innovations in Identity Infrastructure
The conference also featured several technical presentations on practical implementations of verifiable credentials and digital identity wallets:
Richard Esplin, Head of Product at Dock, presented "Biometrics and Verifiable Credentials: Balancing Security and Privacy," addressing the challenges biometric providers face as regulations become stricter. Esplin shared best practices for integrating biometrics with verifiable credentials without undermining privacy and flexibility.
Biometrics and Verifiable Credentials: Balancing Security and Privacy [Intermediate]. Biometric providers are facing new challenges as regulations governing biometric data become stricter and organizations try to extend their biometric enabled business processes across ecosystems… (KuppingerCole)
Dr. Paul Ashley, CTO of Anonyome Labs, discussed the implementation of Hardware Security Modules (HSMs) in digital identity wallets in his talk "Digital Identity Wallet Utilizing a Hardware Security Module." The presentation explored how digital identity wallets can be enhanced through HSM integration to fulfill the requirements of the EU Digital Identity Wallet framework, with analysis of each credential standard's compatibility with various HSMs' cryptographic capabilities.
Dr. Paul Ashley presenting "Digital Identity Wallet Utilizing a Hardware Security Module"
Looking Forward
The DIF community remains the leading forum for innovation in decentralized identity standards and implementations. The frameworks, protocols, and approaches discussed at the conference provide a clear architectural roadmap for solutions that protect individual autonomy while enabling secure, verified interactions between humans and AI systems. Through continued collaboration across our working groups, DIF remains committed to developing open standards that address both current and emerging identity challenges.
To learn more about these topics or to get involved with the Decentralized Identity Foundation's work, visit DIF's website.
ISL began its life as the Me2B Alliance, striving to create standards to enable greater power symmetry in the digitally facilitated relationship between consumers ("Me-s") and the companies whose technology they use ("B-s"). We called this the Me2B relationship. For mobile apps, all too often it's a case of "Me2 Who Knows?!" People have a right to know who's legally responsible for the apps they use, and it is anything but clear in mobile app stores today. App stores are failing to make clear which legal entity is responsible for each app. ISL has filed responsible disclosures with Apple starting in late 2024, but our repeated attempts have been dismissed.
Anatomy of Responsible Party Info in the App Stores
Both Google and Apple allow for two kinds of developer accounts: individual and organization. The creation of either type of account requires identity validation, but it's a lower bar for individuals than for organizations. Individuals must provide a government-issued ID credential before being allowed to open a developer account; this validates the individual's name and address. Organizations, however, must provide a DUNS number to validate the legal existence of the organization.[1][2]
▶ Problem 1: How effective is this level of identity validation? ISL recently found an app developer with 15 apps in the Google Play store with no verifiable legal existence whatsoever. Thus, the process is imperfect at best.
In both stores, the “Account Holder” (to use Apple’s language) is the individual/entity who is in a legal relationship with the app store [owner].
Figures 1a and 1b show two parts of an Apple App store listing. Note that the name in blue under the app name appears to be the Account Holder (Figure 1a). Note that the Information section of the app listing shows five other places where we expect to see the same Account Holder name and websites.
Figure 1a: Apple App Store Example – App Header
Figure 1b: Apple App Store Example – App Information
Figures 2a and 2b show a similar annotated view of the Google Play Store app listing. Between Figures 2a and 2b, there are six instances where the Account Holder name appears.
Figure 2a: Google Play Store Example – Part 1
Figure 2b: Google Play Store Example – Part 2
This all seems fine. In practice, though, the various links and names presented in the app store listing that should definitively show the name of the party legally responsible for the app often have inconsistencies. Which brings us to additional problems.
▶ Problem 2: Account Holders can create additional user accounts within their account, including users with permissions to submit/delete apps.[3] There's seemingly no governance over this capability, which is left strictly in the hands of the Account Holder.
▶ Problem 3: The app store app listing doesn't indicate whether the developer of the app is an individual or a company. This information matters. People deserve to know if they're using an app developed by an individual developer or by a company. No matter what, so long as apps are collecting personal information, people have an unconditional right to know who gets their data and what they're doing with it.
▶ Problem 4: The Apple app store doesn't disclose the location of the responsible app developer, but the Google Play store does.[4] The great thing about app ecosystems is that they foster worldwide participation. The problem is that the responsible developer can be oceans away from consumers, making it difficult or impossible to hold the developer accountable if there are issues.
▶ Problem 5: Apps have broken developer links. It's wildly confusing when the name in blue or green font under the app name is different from the name that appears when you click on the developer link. Imagine if you went to a grocery store and found a loaf of bread with no brand or company information. You wouldn't want to eat that. When you click on the Developer Website link for the app shown in Figure 1b, you land not on a site that says Kepler47 but on a non-functional page at audiojoy.com (Figure 3).
Figure 3 Developer website URL for 12 Step AA NA Daily Meditation from the Apple App Store: https://audiojoy.com/cgi-sys/suspendedpage.cgi
▶ Problem 6: The Account Holder name from the listing header doesn't match the name in the privacy policy OR in the App Support link. Figures 4a and 4b illustrate a case where the listed developer in the listing header is Will Aitchison (Figure 4a), but the privacy policy fails entirely to indicate a legally responsible data controller (Figure 4b).
Figure 4a: Account Holder name
Figure 4b: Privacy policy link for app by Will Aitchison: https://www.firststeporegon.org/docs/PrivacyPolicy_25-05-2018.pdf
Figure 4c: Privacy Policy from “Developer’s Website”
Note that there's another layer of confusion for the First Step Oregon app: namely, the privacy policy found on the App Support page differs from the privacy policy linked in the app store (Figure 4c). This is likely a case where the app developer was an individual affiliated with the organization who wrote and submitted the app on its behalf. Still, it leaves a question for users: who is responsible? Who does the user contact in the case of issues?
The Boggle: Arcade Edition app in the Apple store shows a similar situation. The Account Holder appears to be Zynga Inc. from the app store listing header (Figure 5a). But when you click on the App Support link you see the Take-Two Terms of Service (Figure 5b). Similarly, the linked privacy policy is also Take-Two's. Finally, this app includes a copyright showing Zynga Inc. in the information section (Figure 5c). In this instance, the original Account Holder (Zynga Inc.) was acquired by another company (Take-Two). Zynga appears to be a wholly owned subsidiary of Take-Two based on its California business registration status, but the "hybrid" information in the app store is confusing.
Figure 5a: Boggle App store listing header – Account Holder: Zynga Inc.
Figure 5b: Boggle App Support Link
Figure 5c: Boggle App store listing – Information Section
Interestingly, not all Zynga games in the app store show Take-Two info at the App Support link. Figure 6b shows the App Support link for FreeCell, another Zynga game.
Figure 6a: FreeCell App store listing header
Figure 6b: FreeCell App Support link
▶ Problem 7: App Information shows two different names. Figures 7a and 7b show elements of the Apple app store listing for the app, Count Money and Coins – Photo Touch Game. In the Information section of the app store listing, Innovative Investments Limited is shown as the Seller, but Grasshopper Apps is the copyright registrant.
Figure 7a: Count Money and Coins App store listing header
Figure 7b: Count Money and Coins app – Information section
▶ Problem 8: App store listings with broken privacy policy links. It’s relatively easy to find apps in the app stores whose privacy links are simply broken, non-functional. This is what we found with most of the Innovative Investments Limited apps (Figure 7c).
Figure 7c: http://www.grasshopperapps.com/privacy-policy
Conclusions
NONE of this should be happening today. App stores receive 30% of all app revenues and thus have ample resources to programmatically monitor these situations. Consumers should never have to conduct forensic research in order to figure out who they're entering into a business relationship with. Here's a recap of what the app store owners should do:
1. Make it crystal clear on your label who the legal entity responsible for the app is (I'll call this the "responsible developer").
2. Make sure ALL instances in and related to the app store listing consistently show the same responsible developer name.
3. Make sure there is valid, working contact information for the responsible developer.
4. Indicate if the developer is an individual or an organization.

Here are recommendations for app consumers:
- If there isn't a privacy policy, don't install the app.
- If there are no privacy details provided in the Apple store, don't install the app.
- If there's no developer contact information provided, don't install the app.
- Contact us if you find these or other problems with app store entries.

Final Thoughts
We are well past the point of understanding the risks of these things, yet we see no systemic changes under development on the part of Apple and Google to put safety measures in place. Perhaps shining this light on some of the issues can help spur action.
Footnotes:
1. https://support.google.com/googleplay/android-developer/answer/13628312?sjid=9574226792909682372-NC
2. https://developer.apple.com/programs/enroll/
3. Summary of roles and permissions for Apple developer accounts: https://developer.apple.com/help/account/access/roles/
4. Location of the developer was met with some warranted and some dubious pushback from Android developers, as shown in this Reddit thread: https://www.reddit.com/r/androiddev/comments/17w3pgz/google_started_displaying_full_legal_name_and/?rdt=50889. Mandatory disclosure of an individual developer's location presents some risks. That said, the developer is capable of getting every user's location information, so it seems a reasonable requirement.

The post Me2B or Me2Who Knows: App Stores Fail to Provide Clear Legally Responsible Party appeared first on Internet Safety Labs.
We’re excited to introduce CATio Spaces, a new way for civil society organizations to connect with our team and talk about cybersecurity in a low pressure, friendly environment.
The post Introducing CATio Spaces: A Learning Space to Talk Cybersecurity appeared first on The Engine Room.
A decentralized repository for secure, scalable genomic data sharing & AI-driven personalized healthcare insights — powered by the OriginTrail Decentralized Knowledge Graph (DKG).
OriginTrail powers the future of ethical AI in healthcare with ELSA
We’re excited to announce that OriginTrail is joining forces with the ELSA (European Lighthouse on Secure and Safe AI) initiative to shape the future of decentralized, privacy-preserving artificial intelligence (AI) in healthcare. Digital healthcare today faces three pressing challenges: safeguarding patient privacy, bridging fragmented data silos for seamless interoperability, and meeting strict regulatory requirements without stifling innovation.
At the heart of this collaboration lies DeReGenAI — a decentralized repository for secure, scalable genomic data sharing and AI-driven personalized healthcare, powered by the OriginTrail Decentralized Knowledge Graph (DKG). This initiative tackles the most pressing challenges in digital health: enabling secure, compliant, and user-sovereign sharing of sensitive genomic data while unlocking the full potential of AI-driven personalized healthcare.
Trustworthy AI needs trustworthy infrastructure
AI is transforming healthcare — but for it to do so responsibly, it must be built on a foundation of trust, transparency, and ethics. That's exactly what OriginTrail brings to the table within the ELSA consortium: an open-source, decentralized infrastructure that ensures data privacy, ownership, and interoperability at scale.
By integrating OriginTrail DKG, DeReGenAI becomes a decentralized repository that puts patients in control of their most personal asset — their genomic data. This enables:
- User-managed permissions: Patients decide who can access their data, when, and for what purpose.
- Privacy-preserving monetization: Individuals can opt to share their data with research institutions or health providers on their own terms.
- AI-ready interoperability: Seamless interaction with AI systems while maintaining the integrity and provenance of the data.

At its core, the OriginTrail DKG acts as a knowledge graph of knowledge graphs — a globally distributed network where each participant maintains control over their own knowledge node. These nodes interact in a fully decentralized manner, eliminating the risks of centralized data silos and single points of failure.
Here’s why this matters:
- Global scale: Access data from diverse sources without compromising security.
- Privacy-first architecture: Data sovereignty is seamlessly integrated into the infrastructure.
- Compliance-ready: Designed with GDPR and other regulatory frameworks in mind.
- Interoperable: Built for seamless integration with AI technologies and healthcare systems.

How does DeReGenAI work?
To power the next generation of personalized healthcare, DeReGenAI employs decentralized Retrieval-Augmented Generation (dRAG) — an evolution of how Large Language Models (LLMs) interact with external data.
Instead of querying a centralized source, the LLMs in DeReGenAI leverage the OriginTrail DKG to retrieve verified, decentralized knowledge. This unlocks:
- More accurate AI insights
- Context-aware healthcare recommendations
- Trustworthy and verifiable AI behavior (a schematic sketch of this retrieval flow follows below)

The ELSA initiative brings together top-tier European academic, industrial, and technology partners, such as the University of Oxford, The Alan Turing Institute, NVIDIA, and others, to build a future where AI is both effective and ethical. As part of the ELSA initiative, OriginTrail is used to build a trusted data ecosystem for the AI age — one where people, not platforms, control their data, and where innovation never comes at the cost of ethics.
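As referenced above, here is a schematic TypeScript sketch of the dRAG pattern: retrieve verified, provenance-carrying statements from a decentralized knowledge graph, then ground the LLM's answer in them. `KnowledgeGraphClient` and the `llm` interface are hypothetical stand-ins, not the actual OriginTrail DKG client or any specific LLM API.

```ts
// Hypothetical interfaces illustrating the dRAG flow; not a real SDK.
interface KnowledgeGraphClient {
  query(sparql: string): Promise<{ statement: string; provenance: string }[]>;
}
interface LLM {
  complete(prompt: string): Promise<string>;
}

async function answerWithDRAG(kg: KnowledgeGraphClient, llm: LLM, question: string): Promise<string> {
  // 1. Retrieve verifiable statements from decentralized nodes, not one silo.
  const facts = await kg.query('SELECT ?s WHERE { ?s ?p ?o } LIMIT 10'); // placeholder query
  // 2. Ground the model in retrieved, provenance-carrying context.
  const context = facts.map(f => `${f.statement} [source: ${f.provenance}]`).join('\n');
  return llm.complete(`Answer using only the verified context below.\n${context}\n\nQ: ${question}`);
}
```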
We’re proud to be driving this change, and even prouder to be doing it alongside an incredible group of partners.
Learn how OriginTrail is powering the shift to human-centric AI at https://origintrail.io/.
Trust the source.
OriginTrail powers the future of ethical AI in healthcare with ELSA was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
The OpenID Connect Working Group recommends approval of the following specification as an OpenID Implementer’s Draft:
OpenID Connect Relying Party Metadata Choices 1.0

This would be the first Implementer's Draft of this specification.
An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the 45-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve this draft as an OpenID Implementer’s Draft. For the convenience of members who have completed their reviews by then, voting will actually begin a week before the start of the official voting period.
The relevant dates are:
- Implementer's Draft public review period: Tuesday, May 13, 2025 to Friday, June 27, 2025 (45 days)
- Implementer's Draft vote announcement: Saturday, June 14, 2025
- Implementer's Draft early voting opens: Saturday, June 21, 2025*
- Implementer's Draft official voting period: Saturday, June 28, 2025 to Saturday, July 5, 2025 (7 days)

* Note: Early voting before the start of the formal voting period will be allowed.
The OpenID Connect working group page is https://openid.net/wg/connect/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.
You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “AB/Connect” working group on your contribution agreement), (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-ab, and (3) sending your feedback to the list.
Marie Jordan – OpenID Foundation Board Secretary
About The OpenID Foundation (OIDF)
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Public Review Period for Proposed Implementer’s Draft of OpenID Connect Relying Party Metadata Choices first appeared on OpenID Foundation.
The OpenID Foundation’s Gail Hodges and Joseph Heenan presented a talk on “If, when, and why to implement the FDX ‘blue’ security profile with FAPI 2.0” on Tuesday April 22nd for the benefit of North American attendees at the Financial Data Exchange’s Spring Global Summit held at the Gaylord National Harbor. This talk is especially timely as members decide what RFCs to approve this month, including a proposed ‘blue’ profile that contains FAPI 2.0, the OpenID Foundation’s leading communications protocol for open banking and open data. Assuming the RFC comes into effect, implementers in North America will be considering this new FDX profile for their open banking deployments and compliance to local regulations in the U.S. and Canada.
FAPI specifications recommended by FDX
FAPI has long been "recommended by FDX," but an important milestone came this spring when the FDX Security and Authorization Work Group unanimously recommended FAPI 2.0 as part of the "blue" profile in a new RFC proposal. The OpenID Foundation, as a new member of the Financial Data Exchange, applauded this encouraging milestone and hopes to see this RFC come into effect this month for the benefit of shared FDX and OpenID Foundation members and contributors.
The OpenID Foundation sees great potential for FAPI 2.0 to help US and Canadian implementers to meet their regulatory compliance obligations, and to do so in a way that delivers security, interoperability, and operational efficiency by default, leaving no entity (large or small) behind.
Other ecosystems in other parts of the world like Brazil, UAE, Saudi Arabia, Norway, and Australia are already benefiting from FAPI, and the prospect of North American implementers implementing FAPI 2.0 at scale looks promising through the FDX relationship.
As Executive Director Gail Hodges said, “The FDX expertise in data specifications perfectly complements the OpenID Foundation expertise in highly secure and interoperable communications profiles, and we value the ongoing collaboration with the Financial Data Exchange.”
Key messages shared with FDX members and stakeholders
Hodges and Heenan made a series of key points in their talk on Tuesday, delivered to an audience of North American banks, fintechs, and aggregators, as well as civil society and government representatives. The first was a summary of the key benefits of FAPI 2.0.
The second was to illustrate the comprehensive nature of FAPI, which fully addresses and specifies security, authorization, authentication, interoperability, and conformance. This approach differs from US and Canadian implementations that use OAuth 2.0 in custom environments and integrations.
The third point was the confidence implementers can have in this proven set of standards, regardless of whether their own operation crosses borders with other jurisdictions or not. Many thousands of implementers have self-certified to FAPI in these jurisdictions, and the OpenID Foundation is proud to partner with many of these jurisdictions, both those that are private sector led and those that are government led. We expect the number of jurisdictions moving to adopt FAPI to grow in line with global adoption of open banking and open data regulation and best practices.
Last, Hodges and Heenan underscored the rationale for banks, fintechs, and aggregators, all of whom can benefit from implementing FAPI in North America.
Key decision points FDX implementers in North America may consider
For those in North America actively conducting due diligence on what they may need to do to conform to US or Canadian regulation, Hodges and Heenan provided a generic decision tree to help them in their analysis.
How to enable FAPI 2.0 from OAuth 2.0
Heenan offered implementers a simple playbook if they have OAuth 2.0 enabled and wish to enable FAPI 2.0:
- Implement secure client authentication: private_key_jwt or MTLS
- Implement sender-constrained tokens: DPoP or MTLS
- Implement Pushed Authorization Requests (IETF RFC 9126)
- Implement PKCE (IETF RFC 7636)

Any implementer can start now by building to the freely available, open-source tests at the OpenID Foundation. They can even self-certify as members ($1k) or non-members ($5k). A sketch of the PKCE step appears below.
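As one concrete illustration of the playbook, here is a minimal sketch of the PKCE step (IETF RFC 7636) using the standard Web Crypto API; variable names are illustrative, and the authorization-server endpoints are out of scope here.

```ts
// Minimal PKCE sketch (RFC 7636), runnable in modern browsers or Node 20+.
function base64url(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

// 1. Generate a high-entropy code_verifier.
const verifier = base64url(crypto.getRandomValues(new Uint8Array(32)));

// 2. Derive the code_challenge as the S256 hash of the verifier.
const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(verifier));
const challenge = base64url(new Uint8Array(digest));

// The client sends `challenge` (code_challenge, method S256) in the pushed
// authorization request, then proves possession by sending `verifier`
// (code_verifier) when exchanging the authorization code for tokens.
```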
The full resource information was provided to participants as published on the OpenID Foundation's website:

- FAPI 2.0 Final Specification: https://openid.net/fapi-2-security-profile-attacker-model-final-specifications-approved/
- Source code publicly available on GitLab: https://gitlab.com/openid/conformance-suite
- Instructions for testing/certifying: https://openid.net/certification/instructions/
- Production deployment: https://www.certification.openid.net/ (log in with any Google/GitLab/OpenID account)

Going forward together
The OpenID Foundation welcomes FDX's Security and Authorization WG / FAPI WG collaboration on the emerging FDX profile containing FAPI 2.0 and hopes that together our mutual members will benefit from a continued and deepening strategic relationship.
One area of mutual interest is to consider running an interoperability event amongst early North American implementers of the blue profile containing FAPI 2.0. Those interested in being part of the interop should contact director@oidf.org.
More broadly, the OpenID Foundation looks forward to supporting the Financial Data Exchange's technical roadmap. We also look forward to sharing our insights on the role that RAR and Grant Management are starting to play in other markets like Australia and the UK, and how such fine-grained authorization could be of value in North American markets as well. Together we hope to offer useful facts that can help inform the due processes and multi-stakeholder discussions in North America.
About the OpenID Foundation
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation's OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, FAPI has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation's standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling 'networks of networks' to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post OpenID Foundation presents at Financial Data Exchange Summit first appeared on OpenID Foundation.
Watch full recording on YouTube.
Status: Verified by Presenter
Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.
Google NotebookLM Podcast
https://trustoverip.org/wp-content/uploads/2025-05-01-Agri-food-Data-Canada-Carly-Huitema-1.wav

Summary
This briefing document summarizes a presentation about Agri-food Data Canada's Semantic Engine, a suite of tools designed to enhance research data management in the agri-food sector by making data Findable, Accessible, Interoperable, and Reusable (FAIR). A central focus is the use of machine-readable data schemas authored with the Overlays Capture Architecture (OCA) standard, which is highlighted for its use of derived identifiers (digests) over traditional assigned identifiers for improved reproducibility and authenticity. The document also details the Semantic Engine's practical tools and ongoing efforts to integrate these standards into existing research infrastructure, addressing challenges like data context and decentralized ecosystems.
Briefing Document: Agri-food Data Canada and the Semantic Engine
Date: May 1, 2025
Subject: Review of Agri-food Data Canada (ADC) project and its Semantic Engine, with discussion on data schemas, identifiers, and integration into research infrastructure.
Sources:
- Excerpts from "2025.05.01-ToIP_EGWG.pdf" (Slides)
- Excerpts from "GMT20250501-145814_Recording.transcript.txt" (Transcript)
- Excerpts from "GMT20250501-145814_Recording_1920x1080.mp4" (Video – used for verification of content and speakers)
- Excerpts from "GMT20250501-145814_RecordingnewChat.txt" (Chat Log)

Attendees/Speakers: Carly Huitema (University of Guelph, ADC), Michelle Edwards (ADC, mentioned), Eric Drury (Forth Consulting), Scott Perry (Digital Governance Institute), Neil Thomson (QueryVision), Steven Milstein (Collab.Ventures), Donald Sheppard.
Overview
This briefing summarizes a presentation by Carly Huitema from Agri-food Data Canada (ADC) to the Trust over IP (ToIP) Ecosystem and Governance Working Group. The presentation focuses on ADC's efforts to improve research data management in the agri-food sector, specifically through the development of the "Semantic Engine" suite of tools. A central theme is the importance of "FAIR" data (Findable, Accessible, Interoperable, Reusable) and how machine-readable data schemas, particularly using the Overlays Capture Architecture (OCA) standard, contribute to achieving this goal. The discussion also highlights the advantages of derived identifiers (digests) over assigned identifiers for reproducibility, authenticity, and decentralization, and ADC's ongoing work to integrate their tools and schemas into existing research infrastructure.
Key Themes and Important Ideas

Improving Research Data Management in a Decentralized Ecosystem:
- The research data ecosystem is described as highly decentralized, with independent research groups.
- While guided by best practices, mandates for standardized approaches are often slow to adopt.
- Incentives can be conflicting, particularly the "publish or perish" culture versus the time needed for thorough data documentation.
- Long-term planning (e.g., 50-year repository funding) is a crucial consideration.
- ADC, a project at the University of Guelph funded by multiple Canadian sources (CFREF, Genome Canada, UoG, OMAFA, Compute Ontario, etc.), aims to address these challenges by working directly with researchers.

Making Agri-food Data FAIR: A core objective of ADC is to make agri-food data FAIR:
- Findable: Ability to identify and locate data resources and their context.
- Accessible: Ability to access (with permission) and use data once found, often requiring open protocols.
- Interoperable: Using standards for data to ensure compatibility, including standard vocabularies.
- Reusable: Data with sufficient context (licenses, provenance, descriptions) can be reused by others for replication or new research.

Carly Huitema states: "Findable is the ability to identify and find resources as well as their context. If it's accessible that once found you can access it with permission and use it interoperable. Certainly lots of our work at Trust Over IP is about how to ensure interoperability of standards including vocabularies and reusable that that there are licenses and provenance and other things that help make this data reusable."

Data Requires Context: Data alone is insufficient; it needs context to be useful. This context includes details like sample source, analysis methods, data schemas, catalogue information, data licenses, data governance agreements, associated publications, methodologies, scripts, and contributors.

The Semantic Engine: ADC is developing the Semantic Engine, a suite of tools designed to help researchers create "rich contextual and machine-readable data schemas."
- The Semantic Engine aims to make the process of documenting data less daunting for researchers.
- It functions as a self-teaching web app, providing guidance and tutorials.
- The engine uses the Overlays Capture Architecture (OCA) standard for writing schemas.

Data Schemas and Overlays Capture Architecture (OCA): A data schema describes the attributes of a dataset (e.g., columns in a table) and provides detailed information about them (type, units, description, format, etc.). OCA is highlighted as an international and open standard for documenting schemas, developed by the Human Colossus Foundation. Two key advantages of OCA:
- Embeds Digests: OCA schemas can embed derived identifiers (digests/fingerprints) for the schema itself and for its constituent parts. This is crucial for reproducibility and authenticity of digital artifacts.
- Organized by Features: OCA structures schemas by features (e.g., all descriptions, all units) rather than attribute by attribute (e.g., JSON-LD, XML Schema). This organization offers advantages:
  - Task-based Governance: Allows for governance at the feature level (e.g., assigning responsibility for translation features).
  - Optimized for Feature Management: Facilitates adding or removing features (like languages or units) without altering the identifiers of other features.
  - Mix-and-Match: Enables easier combination and reuse of different schema components.
ADC has developed an "OCA Package," which wraps the core OCA standard with extensions for community-specific features and developing standards, allowing for gradual migration of these features to the core standard as they become accepted.

Assigned vs. Derived Identifiers:
- Assigned Identifiers (Names): Created by a governance body, linked to an object via a lookup table. Resolution requires trusting the authoritative governance body and their lookup table. "If you find an object, you cannot figure out the identifier – you must go to the authoritative body and look it up in their table." Resolution services can only be hosted or delegated by the governance body.
- Derived Identifiers (Digests/Fingerprints): Calculated directly from a digital object using a hashing function; they are unique fingerprints for a specific version of the object. Key for reproducibility and authenticity: "You can identify the resource originally used. You can verify the resource is the same one that was originally used." Anyone can calculate a derived identifier, build a resolution service, and verify the resolution service is pointing to the correct object. Derived identifiers enable objects to be hosted in multiple locations. Carly Huitema humorously quotes, "If you liked it, then you should have put a digest on it." (A minimal code sketch of this idea appears at the end of this section.)
- Derived identifiers are excellent for snapshots but do not handle dynamic content or versioning directly. Versioning requires a governing authority or a decentralized identifier (DID) system where subsequent versions are linked and controlled.

Tools Provided by the Semantic Engine:
- Schema Authoring Web App: Guides researchers through creating machine-readable schemas.
- Data Entry Excel Generator: Creates an Excel spreadsheet with headers and schema descriptions based on the authored schema, helping standardize data collection. Includes the schema's derived identifier.
- Data Entry on the Web / Data Verification Engine: A tool to verify data sets against the rules defined in a schema, allowing researchers to quickly check for inconsistencies before combining data from multiple sources.

Integration into Research Infrastructure:
- ADC is working to integrate their schemas and tools into existing Canadian and international research infrastructure.
- Schemas can be deposited into long-term research data repositories (e.g., Borealis in Canada), often receiving assigned identifiers like DOIs.
- These schemas, with their embedded derived identifiers, can then be found through federated search engines that index multiple repositories (e.g., Lunaris in Canada, OpenAIRE in Europe).
- This allows researchers to publish papers referencing schemas by their identifiers, enabling others to find and verify the schema used.

Addressing IP and Sensitive Data:
- The Semantic Engine itself does not store user data or schemas, reducing IP concerns related to the platform.
- Schemas are generally less sensitive than the actual data, allowing them to be more openly shared. This enables discovery of datasets and potential collaborations without exposing proprietary information.
- Schemas can include flags for sensitive data attributes (e.g., farm location). While ADC's tools don't currently enforce access controls based on these flags, this information in the machine-readable schema can be used by internal pipelines or other systems to manage sensitive data appropriately (e.g., triggering anonymization).
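As referenced above, here is a minimal sketch of the derived-identifier idea: hash a canonicalized serialization of a digital object to obtain a fingerprint anyone can recompute and verify. This is a simplified illustration only; OCA's actual digest scheme (self-addressing identifiers over a canonical serialization) is more involved.

```ts
// A simplified illustration of a derived identifier: the digest is computed
// from the object itself, so no lookup table or naming authority is needed.
import { createHash } from 'node:crypto';

function derivedIdentifier(obj: Record<string, unknown>): string {
  // Sort top-level keys so the same logical object always serializes the same way.
  const canonical = JSON.stringify(obj, Object.keys(obj).sort());
  return createHash('sha256').update(canonical).digest('hex');
}

// An illustrative (hypothetical) schema fragment for a soil-pH attribute.
const schema = { attribute: 'soil_ph', type: 'numeric', unit: 'pH' };
const id = derivedIdentifier(schema);

// Anyone holding a copy of the schema can recompute `id`; a match proves the
// copy is byte-for-byte the version originally referenced.
```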
Future Directions: ADC plans to continue integrating with research infrastructure, add digests as identifiers to more objects, and develop tools for other machine-readable standards (e.g., cataloging metadata, policy rules). It also aims to increase the number of features supported in the schema description process (e.g., range rules, ontology framing).

Key Takeaways for ToIP:
- The FAIR data principles are highly relevant to decentralized ecosystems and align well with ToIP goals.
- Derived identifiers (digests) offer significant advantages for reproducibility, authenticity, and decentralized resolution, making them a powerful tool for digital objects within a trust framework.
- The architecture of data schemas (feature-by-feature vs. attribute-by-attribute) has implications for governance, versioning, and the application of derived identifiers to schema components.
- Integrating decentralized identity and verifiable credentials concepts (like OCA) into existing research infrastructure can enhance discoverability, interoperability, and trust in scientific data.
- The Semantic Engine provides a practical example of building user-friendly tools to generate machine-readable metadata, addressing the challenge of widespread adoption of such standards.

For more details, including the meeting transcript, please see our wiki: 2025-05-01 Agri-food Data Canada – Carly Huitema – Home – Confluence
https://www.linkedin.com/in/carly-huitema-27727011/
https://www.semanticengine.org/
The post EGWG 2025-05-01 Agri-food Data Canada – Carly Huitema appeared first on Trust Over IP.
Watch the full recording on YouTube.
Status: Verified by Presenter. Please note that ToIP used Google NotebookLM to generate the following content, which the presenter has verified.
Google NotebookLM Podcast
https://trustoverip.org/wp-content/uploads/ToIP-EGWG-2025-03-20_-Richard-Whitt-GliaNET-1.wav
This document and podcast were generated by Google’s NotebookLM. They provide information about Richard Whitt’s vision for a more human-centric internet built on trust, as presented to the Ecosystem & Governance Working Group (EGWG) of the Trust over IP Foundation on March 20, 2025. It also draws from materials related to the GLIA Foundation and the GliaNet Alliance, founded by Richard Whitt.
The current state of the web is characterized by surveillance capitalism, where companies prioritize data extraction and manipulation, leading to a lack of trust. Richard Whitt argues that this undermines human agency and necessitates a shift towards a web built on trustworthy intermediaries.
His proposed solution centers around the concept of Net Fiduciaries, a new category of entities that would prioritize users’ interests through the application of fiduciary duties like care and loyalty, similar to professionals in medicine and law. This would be a voluntary approach, driven by ethical considerations and good business practices, rather than imposed regulations on existing platforms.
The GliaNet Alliance is a coalition of companies and organizations committed to this vision. Its goal is to build a “web worthy of trust” by fostering ethical technology practices and using transformative governance principles. The alliance operates as a community of practice, with working groups focusing on areas like business models, policies, practices, and standards.
Key concepts discussed include:
- The SEAMS cycle (Surveillance, Extraction, Analysis, Manipulation), which describes the problematic data practices prevalent on the web.
- Glia, the ancient Greek word for glue, symbolizing trust as the social glue.
- A multi-layered approach to change, encompassing governance, markets, technology (edge tech), and public policy.
- The importance of authentic personal AI agents that operate on behalf of the user, contrasting with the “double agent” nature of current AI assistants that primarily serve platform interests.
- The distinction between agenticity (capability) and agentiality (acting on behalf of) in AI systems.
- The potential for interoperability between AI agents across different platforms.

The alliance is exploring ways to demonstrate trust to the public, potentially through analogies (like a “doctor for your web life”), marketing that emphasizes fiduciary duties, and clear communication about data handling practices. It is also considering mechanisms for recourse in case of breaches.
Richard Whitt’s book, “Reweaving the Web,” further elaborates on these ideas, outlining the problems with the current web and proposing practical steps to create a more user-centric digital future. The book has received positive testimonials from prominent figures in the tech and policy fields.
The GliaNet Alliance is in its early stages, focused on building its community, establishing governance structures, and exploring business models that align with its ethical principles. They are also engaging with policymakers and exploring potential collaborations, including with the Trust over IP Foundation. Consumer Reports is an anchor member and is exploring branding its AI agent as a GliaNet project. Kwaai.ai, an open-source LLM project, is also part of the alliance, aiming to build a platform with fiduciary duties to developers and agents.
For more details, including the meeting transcript, please see our wiki 2025-03-20 GliaNet – Home – Confluence.
https://www.linkedin.com/in/richardwhitt/
https://www.glia.net/
The post EGWG 2025-03-20: Richard Whitt, GliaNET appeared first on Trust Over IP.
The post Protected: EGWG 2025-04-03: Stephan Wolf, Verifiable Trade Foundation appeared first on Trust Over IP.
As blockchain ecosystems evolve, interoperability between networks is becoming increasingly vital. During the 2023 Hyperledger Mentorship Program, I had the opportunity to contribute to this vision by developing a Polkadot connector for Hyperledger Cacti, a platform now hosted by LF Decentralized Trust that facilitates interactions across heterogeneous distributed ledgers.
From April 28-30, 2025, Stockholm became the epicentre of innovation in digital identity federation as SUNET welcomed 30 international delegates with 14 implementations to a groundbreaking OpenID Federation Interoperability event. The event brought together leaders from the private sector, government, not-for-profits, academic institutions, technical experts, and policy collaborators to prove out the Federation standard through live interoperability testing – a pivotal milestone on the specification’s journey to final status.
OpenID Federation is a breakthrough approach to federation that enables trust to be established between parties with no direct relationship between them by virtue of them belonging to a common federation. This modernizes the approach for establishing trust among parties for multiple purposes, including verifiable credential deployments, and seamless integration of multiple trust networks.
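As a rough illustration of the mechanics, here is a sketch of reading an entity's published configuration. It assumes the well-known path defined by the Federation specification; the entity URL is hypothetical, and signature verification is deliberately omitted, whereas a real resolver must verify every statement's signature and walk the chain up to a trust anchor it already trusts:

```python
# Sketch: reading an OpenID Federation Entity Configuration.
# The entity URL is hypothetical and signature verification is omitted;
# a real resolver must verify each statement's signature and follow
# authority_hints up to a trust anchor it already trusts.
import base64
import json
from urllib.request import urlopen

def decode_jwt_payload(jwt: str) -> dict:
    """Decode a signed JWT's payload without verifying it (demo only)."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def fetch_entity_configuration(entity: str) -> dict:
    """Entities publish a signed Entity Configuration JWT at a
    well-known path under their entity identifier."""
    with urlopen(f"{entity}/.well-known/openid-federation") as resp:
        return decode_jwt_payload(resp.read().decode())

if __name__ == "__main__":
    config = fetch_entity_configuration("https://rp.example.org")  # hypothetical
    # authority_hints lists superiors from which to fetch Subordinate
    # Statements, repeating until a configured trust anchor is reached.
    print(config["iss"], config.get("authority_hints", []))
```

The key idea is that two parties with no direct relationship can each present a chain of signed statements terminating at a trust anchor they both recognize.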
Hosted by SUNET (Swedish University Computer Network), this event was a rare opportunity to gather leading minds in identity federation into one room, to not only test the specs and identify potential improvements but to coalesce critical industry partners on realizing the potential of the specification to meet the requirements they have for ecosystems today. Participants came from Sweden, Finland, Netherlands, Italy, Germany, Denmark, Portugal, Poland, Serbia, Croatia, the UK, Brazil, Australia, New Zealand, and the United States!
“It was a pleasure to host the OpenID Foundation. The opportunity for our local community, particularly those in the public sector, to engage directly with international thought leaders was truly invaluable. We look forward to welcoming them again soon,” said Leif Johansson, SUNET.
1. Implementations
Day one kicked off with participants aligning their implementations to the OpenID Federation 1.0 (draft 42) specification. Each team brought their own unique infrastructure, metadata configurations, and trust policies to the table, showcasing the diversity of approaches and the flexibility of the specification.
There was a real spirit of collaboration in the air. Teams from different countries worked in partnership to configure and test their implementations, capturing feedback on the specs, the open source tests, and highlighting real-world integration scenarios. For the participants with a strong EU focus, many explored how these building blocks can deliver on the EU Digital Identity Wallet ecosystem requirements.
Mike Jones, Board Member at the OpenID Foundation said: “There’s no substitute for getting developers and deployers together to collaborate and kick the tires together. People worked closely together over three days, many of whom had previously never met. We learned valuable things about our implementations, the specification, and the certification tests that will improve all of them. This puts us on a solid track to finish the specification and to have leading-edge ecosystems benefit from it.”
2. Testing
After initial setup, it was time to stress-test the connections. The energy in the room was palpable with mixed groups reviewing and debating code whilst viewing shared terminals, debugging issues, and huddling spontaneously at whiteboards. Through structured testing and ad-hoc exploration, participants uncovered edge cases that could inform the underlying specifications and issues that could improve the tests, but crucially they could also see where their own implementations could be improved. Eight different classes of tests were performed among the 14 implementations.
Roland Hedberg, lead editor of the Federation specification, said: “Three very intensive and fun days. I was so impressed by the number of implementers present and their willingness to put their work up for evaluation. One thought that struck me was the underlying pervasive feeling that it’s now not a question about if people will use OpenID Federation, but rather when they will start.”
3. Results
The interop delivered measurable progress for everyone involved. Participants left with working federation configurations, validated trust chains, and actionable insights into specification gaps or areas of ambiguity. It was a technical leap forward that only face-to-face interoperability and cross-implementor collaboration can achieve.
Figure 1: Portrait of the Federation Entities involved during the interop event.
While challenges were expected, the collective learning has now given the OpenID community clear visibility on the path to a final specification.
Dima Postnikov, representing ConnectID, said, “Trust management and discovery are now recognized as essential building blocks for anyone who runs ecosystems. This interoperability event has clearly shown that the OpenID Federation specification has reached the level of maturity required for global adoption, as demonstrated by the number of active participants representing real ecosystems and software solutions. OpenID Federation specification has a lot of potential for many more use cases and many different types of ecosystems.”
4. Lessons Learned
The week’s testing surfaced several areas for improvement, both in the specification itself and in the surrounding tooling:
- Clarification of policy requirements across federation operators.
- Improved debugging tools for tracking trust chain resolution.
- Diversity of algorithms used for signing Entity Statements.

Henri Mikkonen, representing Shibboleth, said, “The event helped us to verify that after some fairly minor fine-tunings we’re already quite well interoperable with the latest draft of the spec. It means that we can start participating in pilots. We were very happy to see that the OpenID Foundation’s conformance test suite is getting populated with tests for the federation draft too. We’re very happy to be early adopters for those tests as they’ve been super useful for testing standard OIDC features too. The good team spirit established in person means that we can continue to test interoperability remotely even after the event, crossing organizational boundaries.”
All feedback gathered during the event will feed into upcoming working group discussions, reinforcing the event’s role not just in implementation, but in shaping the next steps for OpenID Federation.
5. Next Steps
In the weeks and months ahead, the group is focused on three clear next steps:
- Publishing updated implementation guidance based on interoperability learnings in Sweden.
- Addressing feedback and finalising v1.0 of the OpenID Federation specification.
- Preparing for future interoperability events, including virtual sessions and additional in-person gatherings in the second half of 2025.

Gail Hodges, Executive Director of the OpenID Foundation, said, “Technologists that lead ecosystems face common challenges. The OpenID Foundation is honored that so many leading technologists have committed thousands of volunteer hours to develop the next generation in federation, and it is a personal joy to see them celebrating this milestone moment of interoperability and accomplishment. I cannot wait until we can celebrate the specification moving to final soon!”
As the work on OpenID Federation matures and ecosystem awareness builds, the OpenID Foundation expects deployment will ripple through projects, products, and jurisdictions around the world.
About the OpenID Foundation
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, FAPI has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling ‘networks of networks’ to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post The OpenID Federation Interoperability Event first appeared on OpenID Foundation.
The National Cyber Security Centre said moving to digital passkeys to log on to Gov.UK was a vital step in making the tech more ubiquitous.
The Government has announced plans to replace passwords as the way to access Gov.UK, its digital services platform for the public.
In contrast to using a password and then an additional text message or code sent to a user’s trusted device – known as two-factor authentication – passkeys are unique digital keys tied to a specific device that prove the user’s identity when they log in, without requiring them to input any further codes.
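A toy sketch of the underlying idea may help (this models only the core challenge-response; real passkeys use the WebAuthn/FIDO2 protocols, origin binding, and an authenticator gated by a biometric or PIN, none of which is modeled here): the server stores only a public key, and logging in means signing a fresh challenge with a private key that never leaves the device:

```python
# Toy model of the challenge-response at the heart of passkeys (FIDO/WebAuthn).
# Real deployments add origin binding, attestation, and user verification
# (biometric or PIN) on the authenticator; this shows only the key idea.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device creates a key pair; the private key never leaves it.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_private_key.public_key()  # all the server stores

# Login: the server issues a random, single-use challenge...
challenge = os.urandom(32)

# ...the device signs it after a local biometric/PIN unlock (not modeled)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies with the stored public key. There is no shared
# secret to phish, reuse across sites, or leak in a database breach.
server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("login verified")
```

This is why passkeys are described as phishing-resistant: a fake site can harvest nothing reusable, since each login signs a fresh challenge and the secret never leaves the device.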
At the 2025 RSAC Conference in San Francisco, our team met with dozens of industry experts, cybersecurity professionals, and investors to find out more about the biggest security technologies and trends that are impacting your business.
Tech-Specific Innovation
While AI was one of the hottest topics at the show, it wasn’t the only topic of discussion; we also heard a lot about the evolving ransomware ecosystem and what organizations need to be doing today to prepare for the arrival of “Q-Day”.
But perhaps the second-biggest discussion piece was around identity and access security.
With the rise of AI-powered deepfakes and fraud attempts, we’re seeing more need than ever before for organizations to make the switch from passwords to more secure methods of authentication, such as Passkeys—and many experts were optimistic that this space will see a lot of adoption over the next year.
Key Insights:
Andrew Shikiar, Executive Director and CEO of the FIDO Alliance: “We’re going to see Passkey deployment continue to grow in regulated industries. That’s really important, because addressing the higher assurance use cases and taking passwords out of play there will give greater confidence for more and more companies to deploy Passkeys at scale, which will further accelerate our journey towards putting passwords fully in the rear-view mirror.”

Microsoft has officially shifted to passkeys, such as facial recognition, fingerprint scans, and PINs, as the default sign-in method for all new accounts beginning this month, marking its most significant step yet toward a password-free future, according to TechRepublic.
The move coincides with World Password Day and aligns with the tech giant’s broader commitment to the Passkey Pledge, an industry initiative to eliminate passwords in favor of more secure, phishing-resistant login methods. In a blog post, Microsoft executives Joy Chik and Vasu Jakkal emphasized that passkey users are three times more likely to log in successfully than those using passwords. Although existing account holders can still use passwords, Microsoft is nudging them toward using biometrics or PINs by default. Nearly all Windows users already rely on Windows Hello, and the shift is backed by support from industry partners, including Apple and Google, who are also rolling out FIDO-compliant passkey systems across their platforms. The change promises to streamline security and user experience across the board.
FIDO-Based Authentication to Replace SMS-Based Verification, Says UK NCSC
The U.K. government is set to replace SMS-based verification systems for digital services with passkeys this year in a bid to shore up cyber defenses.
The initiative will be rolled out by the U.K. National Cyber Security Centre using the open authentication standard Fast IDentity Online, or FIDO, as a more “secure and cost-effective solution.”
“The NCSC considers passkey adoption as vital for transforming cyber resilience at a national scale,” the NCSC said. “In addition to enhanced security and cost savings, passkeys offer users a faster login experience, saving approximately one minute per login when compared to entering a username, password and SMS code,” the agency said.
Zug, Switzerland (May 6, 2025) — Umanitek AG, a Swiss-based AI company combating harmful content and the risks of artificial intelligence, today announces the launch of their first product umanitek Guardian.
Umanitek’s mission is to fight against harmful content and the risks of AI by developing and deploying technology that serves the greater good of humanity.
Umanitek’s first product is an AI agent, umanitek Guardian, that uses the Decentralized Knowledge Graph (DKG), a decentralized, trusted network for organizing and tracking immutable data. The DKG lets participating organizations keep ownership and control of their data while supporting queries on a need-to-know basis, enabling collaboration without compromising privacy.
The first user of umanitek Guardian will be Aylo, who will leverage the agent to allow law enforcement agents to query 7 million hashes of its verified content using natural language through an AI agent.
“Umanitek acts as the bridge. Through Decentralized Knowledge Graph (DKG) decentralized infrastructure, we can integrate advanced Internet safety technologies directly with data. Umanitek Guardian will enable companies, law enforcement, NGOs and individuals to collaborate by uploading and querying “fingerprints” of images and videos to a decentralized directory. This system will help large technology platforms track, identify and prevent the distribution of harmful content. We are committed to developing human-centric AI solutions that promote trust, protect privacy and help make internet safety the standard in the age of AI.”
– Chris Rynning, umanitek Chairman
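A minimal sketch of the fingerprint-lookup pattern described in the quote above (the hash function and directory entries are illustrative assumptions; the real system runs over the DKG with access controls and far richer tooling): platforms check whether a file's fingerprint appears in a shared directory of known content, without the content itself ever being shared:

```python
# Sketch of fingerprint-based lookup against a shared hash directory.
# SHA-256 and the sample entry are illustrative, not umanitek specifics;
# the real directory lives on the Decentralized Knowledge Graph.
import hashlib

# A directory holding fingerprints of verified content, not the media itself.
known_fingerprints = {
    # SHA-256 of the bytes b"test", used here as a stand-in entry:
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes) -> bool:
    """Membership check: the query reveals only a hash, never the content."""
    return fingerprint(data) in known_fingerprints

print(is_known(b"test"))            # True: matches the stand-in entry
print(is_known(b"something else"))  # False
```

The design point is that collaborators exchange only fingerprints, so a platform can screen uploads against the directory while each organization retains control of its underlying data.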
About umanitek
Making internet safety the standard in the age of AI.
Umanitek AG is a Swiss-based AI company combating harmful content and the risks of artificial intelligence. We develop human-centric AI solutions that promote trust, protect privacy and make internet safety the standard in the age of AI.
Our founders bring deep expertise in building reliable, trusted AI systems and are connected to global networks working to reduce internet harm, and are committed to raising awareness about the importance of education and digital responsibility in the age of AI.
Umanitek’s AI infrastructure is safe by design, open by principle and trustworthy by default. With a focus on ethical innovation, umanitek is setting the standards for transparency, accountability and harm reduction in artificial intelligence.
For more information about umanitek, umanitek’s founders and products, visit www.umanitek.ai.
Contacts
For media inquiries, please contact:
Umanitek Communication
umanitek launches umanitek Guardian AI agent was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
We are thrilled to announce that OwnYourData has been honored with the MyData Award 2025 in the Technology category by MyData Global. This recognition celebrates our commitment to developing human-centric data solutions that empower individuals and organisations with greater control over their information, enabling them to manage, share, and benefit from their data in transparent and ethical ways.
The MyData Awards acknowledge organizations and individuals making significant strides in ethical personal data practices across various domains, including technology, business, governance, and thought leadership. This year, over 400 nominations were evaluated, highlighting the growing global emphasis on data rights and individual empowerment.
Our award in the Technology category underscores our efforts in creating interoperable and privacy-focused tools that align with the principles of the MyData Declaration. We are especially honored that both the organisation OwnYourData and our chairperson, Christoph Fabianek, were recognized with MyData Awards. This dual recognition highlights not only our collective impact as a team but also Christoph’s individual leadership and long-standing commitment to building a fair, sustainable, and prosperous digital society.
For more details on the MyData Awards and the list of 2025 recipients, visit the official MyData Awards page.
The post MyData Award 2025 appeared first on www.ownyourdata.eu.
The UK government is set to roll out passkey technology for its digital services later this year as an alternative to the current SMS-based verification system, offering a more secure and cost-effective solution that could save several million pounds annually.
What does it take to grow a brand on Amazon today?
The rules have changed, and Amazon is now less of a storefront and more of a dynamic ad and data platform.
In this episode, Jason Boyce, Founder and CEO of Avenue7Media, joins host Reid Jackson to share what he’s learned over 22 years in the e-commerce world. From selling as a reseller to building a brand and leading an agency, Jason explains why success now depends on feeding the algorithm, not pitching to buyers.
He breaks down how streaming TV is driving direct Amazon sales, the problem with locked attributes and UPCs, and why attribution is finally catching up to the promise of connected TV.
In this episode, you’ll learn:
Why traditional retail rules don’t apply to Amazon
How Connected TV can accelerate brand growth
What makes Amazon profitable when compared to DTC
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(02:53) From failed consulting to full-service agency
(06:26) Fixing locked attributes and bad UPC data
(09:59) Streaming ads that convert like e-commerce
(13:56) Selling to algorithms, not people
(16:34) Comparing Amazon to direct-to-consumer
(20:44) Why CTV is outperforming display ads
(26:48) How AI will change the way we shop
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Jason Boyce on LinkedIn
Check out Avenue7Media
Internet Safety Labs’ Executive Director Lisa LeVasseur testified before the Maine Judiciary Committee in support of LD 1822, the Maine Online Data Privacy Act, while highlighting concerns. Informed by ISL’s 2022 K-12 Edtech safety benchmark and ongoing research, our testimony underscores the need to curb widespread commercial surveillance and risky data practices. LD 1822’s restrictions on sensitive data collection and sales are vital, yet we advocate for stronger protections, particularly for FERPA-covered data, non-profits, and medical apps. The written testimony is available to view in the pdf below:
The post Internet Safety Labs Provides Testimony in Support of LD 1822, An Act to Enact the Maine Online Data Privacy Act appeared first on Internet Safety Labs.
Date: April 16, 2026
Time: 9 am – 5 pm
Attendee Ticket: $49
Event Location:
The College of New Jersey
The post EdgeCon Spring 2026 appeared first on NJEdge Inc.
The FIDO Alliance announced today the launch of a new Payments Working Group (PWG) focused on developing and implementing FIDO authentication solutions specifically for payment use cases. This initiative marks a significant expansion of the organization’s efforts to eliminate password dependencies in critical digital transactions.
The new working group emerges at a time of growing momentum for passwordless authentication in the payments sector. Last year, Visa implemented passkeys for online payments, allowing customers to authorize transactions using biometric authentication rather than traditional passwords.
The Alliance, which now comprises over 250 members, has been steadily expanding its influence across various sectors of digital authentication.
World Password Day is no longer. The annual day to promote secure password practices has run its course, and the FIDO Alliance (whose stated mission, to be fair, is to bring the world beyond passwords) has rebranded World Password Day as World Passkey Day – an occasion to celebrate the encrypted FIDO credentials that combine data you possess (a digital key or credential) with a biometric trait (something you are, usually a face or fingerprint).
Anyone setting up a new Microsoft account will soon find they’re encouraged to use a passkey during the sign-up process.
Microsoft introduced passkey support across most of its consumer apps last year, allowing users to sign into their accounts without the need for 2FA methods or remembering long passwords. A year later, it’s removing passwords as the default and encouraging all new signups to use passkeys.
PCMag attempted to sign up for a new Microsoft account on May 2, but it still asked for a password at the time of publication. Microsoft hasn’t shared an exact timeframe for when the change will take place, but you should expect it to happen in the coming days.
This is the first time a new account can be entirely passwordless. Previously, an account had to have a password alongside your passkey.
In a blog post, Microsoft says 98% of passkey attempts to log in are successful, while passwords are only at 32%. Microsoft is also introducing what it calls a “streamlined” sign-in experience for all accounts that “prioritizes passwordless methods for sign-in and sign-up.” It means some UX design changes to highlight passkey functionality.
We’re excited to announce a new partnership with Puentes, an organization dedicated to strengthening the narrative power of social justice movements in Latin America.
The post Empowering narratives, strengthening ecosystems: A partnership for digital resilience appeared first on The Engine Room.
WAO is currently working on a project with the Responsible Innovation Centre for Public Media Futures (RIC), hosted by the BBC. Our focus is on AI Literacy for 14–19 year olds and you can read more context in our project kick-off blog post.
One of the deliverables for this project is to review AI Literacy frameworks, with a view to either making a recommendation, or coming up with a new one. It’s not just a case of choosing one with a pretty diagram!
Frameworks are a way in which an individual or organisation can indicate what is worth paying attention to in a given situation. Just as the definition of ‘AI Literacy’ varies by context, the usefulness of a framework depends on the situation. In this post, we share the judgements we made using criteria we developed and share our process in case it is useful for your own work.
Narrowing down the list
While there can be some commonality and overlaps between frameworks for different contexts, the diversity of possible situations is huge. There can never be a single ‘perfect’ framework suitable for every situation. For example, just imagine what ‘AI Literacy’ might look like for (adult) engineers and developers compared with children of primary school age. As with our work at Mozilla you can define what a ‘map’ of new literacies might look like, but it can only ever be one of many that describe the overall ‘territory’.
With our work on this project, we had to bear in mind our audience (14–19 year olds) and the mission of the BBC. There is a long history of Critical Media Literacy which is particularly relevant to our research here, and which was one of the factors when reviewing frameworks.
With a relatively short project timeline of three months, we needed a way to quickly classify approximately forty frameworks and related documents we have collected. We shared relevant details with Perplexity AI (using the Claude 3.7 Sonnet model) over multiple conversations. This helped us reduce our initial list of around 40 frameworks to a more manageable 25.
Coming up with criteria
Next, we came up with some criteria by which to judge them. These criteria were informed by our own work in the area for 15+ years, along with either interviews or surveys with over 35 experts in the field. While these criteria are meant as a heuristic for this project, they are also a useful starting point for asking questions about any project relating to new literacies.
- Definition of AI — ensures everyone has the same starting point
- Development process — adds transparency and credibility
- Target audience — helps match the framework to its users
- Real-world relevance — shows how ideas work in practice
- AI safety and ethics — addresses both risks and responsible use
- Skills and competencies listed — clarifies what learners should be able to do
- Reputable source — increases trust in the framework

We included both safety and ethics because both are needed for using AI in a responsible and trustworthy way.
Categorising the most relevant frameworks
We used a traffic light (red/yellow/green) categorisation system to score each framework on the above criteria. Only one of the frameworks we reviewed, the OECD Framework for Classifying AI Systems, meets all criteria with a ‘green’ rating.
There are several other frameworks which we judge as meeting the criteria as ‘green’ except for one criterion (‘yellow’). Listed alphabetically by organisation, these are:
- Artificial Intelligence in Education (Digital Promise)
- AI Literacy in Teaching and Learning: A Durable Framework for Higher Education (EDUCAUSE)
- Digital Competence Framework for Citizens (European Commission)
- Developing AI Literacy With People Who Have Low Or No Digital Skills (Good Things Foundation)
- AI competency framework for students (UNESCO)

There are other frameworks we have decided to include which scored ‘yellow’ on two or more criteria. For example, the Open University’s Critical AI Literacy Framework, Ng et al.’s article, and Prof. Maha Bali’s blog post linked from her framework all do a good job of defining Critical AI Literacies. We would also note that the Digital Education Council’s list of skills and competencies relating to AI Literacy is useful to pair with those from EDUCAUSE, UNESCO, and the European Commission.
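For what it’s worth, the traffic-light triage we describe above can be expressed in a few lines. This is a minimal sketch; the framework names and ratings below are invented placeholders, not our actual scores:

```python
# Sketch of the traffic-light triage described above. Framework names
# and ratings are invented placeholders, not the project's actual scores.
CRITERIA = [
    "definition", "development_process", "audience", "relevance",
    "safety_ethics", "skills", "source",
]

# Each framework is rated red / yellow / green on every criterion.
ratings = {
    "Framework A": dict.fromkeys(CRITERIA, "green"),
    "Framework B": {**dict.fromkeys(CRITERIA, "green"), "audience": "yellow"},
    "Framework C": {**dict.fromkeys(CRITERIA, "green"),
                    "audience": "yellow", "source": "red"},
}

def tier(scores: dict) -> str:
    """All green, green-but-one-yellow, or flagged for closer review."""
    values = list(scores.values())
    if "red" in values:
        return "review"
    yellows = values.count("yellow")
    if yellows == 0:
        return "all green"
    return "one yellow" if yellows == 1 else "two+ yellows"

for name, scores in ratings.items():
    print(name, "->", tier(scores))
```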
Next steps
As mentioned earlier, our brief for this project involves either making an informed recommendation of a framework, or coming up with our own. We’re currently leaning toward the latter, but either choice will be the subject of a future blog post.
If you have questions, concerns, comments, or indeed a particularly useful resource which you think would be useful for this project, please do get in touch. You can leave a comment here, or use the contact details on our main website to get in touch!
What makes for a good AI Literacy framework? was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
After supporting passwordless Windows logins for years and even allowing users to delete passwords from their accounts, Microsoft is making its biggest move yet toward a future with no passwords. Now it will ask people signing up for new accounts to only use more secure methods like passkeys, push notifications, and security keys instead, by default.
The new no-password initiative by Microsoft is accompanied by its recently launched, optimized sign-in window design with reordered steps that flow better for a passwordless and passkey-first experience.
Although current accounts won’t have to shed their passwords, new ones will try and leave them behind by not prompting you to create a password at all:
As part of this simplified UX, we’re changing the default behavior for new accounts. Brand new Microsoft accounts will now be “passwordless by default.” New users will have several passwordless options for signing into their account and they’ll never need to enroll a password. Existing users can visit their account settings to delete their password.
With today’s changes, Microsoft is renaming “World Password Day” to “World Passkey Day” instead and pledges to continue its work implementing passkeys over the coming year. This time last year, the company implemented passkeys into consumer accounts. Microsoft says it’s seeing “nearly a million passkeys registered every day,” and that passkey users have a 98 percent success rate of signing in versus 32 percent for password-based accounts.
As you’ll already know, today is World Passkey Day and the FIDO Alliance has released an independent study of over 1,300 consumers across the US, UK, China, South Korea, and Japan to understand how passkey usage and consumer attitudes towards authentication have evolved.
The results are encouraging: they find that 74 percent of consumers are aware of passkeys and 69 percent have enabled passkeys on at least one of their accounts.
Among those who have used passkeys, 38 percent report enabling them whenever possible. More than half of consumers now believe passkeys are both more secure (53 percent) and more convenient (54 percent) than passwords.
This increase in passkey adoption is likely driven by the shortcomings of passwords. Last year, over 35 percent of people had at least one of their accounts compromised due to password vulnerabilities. In addition, 47 percent of consumers will abandon purchases if they have forgotten their password for that particular account.
To further encourage organizations to embrace the shift to passkeys, the FIDO Alliance has also launched the Passkey Pledge, a voluntary pledge for online service providers and authentication product and service vendors committed to embracing passkeys.
“The establishment and growth of World Passkey Day reflects the fact that organizations of all shapes and sizes are taking action upon the imperative to move away from relying on passwords and other legacy authentication methods that have led to decades of data breaches, account takeovers and user frustration, which imperil the very foundations of our connected society,” says Andrew Shikiar, executive director and CEO of the FIDO Alliance. “We’re thrilled by the fact that over 100 organizations around the world signed our Passkey Pledge, and we are pleased to support the market in their march towards passkeys through a variety of freely available assets, including our market-leading Passkey Central resource center.”
The full report is available from the FIDO Alliance site.
As the first-ever World Passkey Day replaces the traditional World Password Day, Microsoft joins the FIDO Alliance in celebrating a milestone achievement: over 15 billion online accounts now have access to passwordless authentication through passkeys.
This significant shift marks a turning point in digital security as the tech industry moves decisively away from vulnerable password-based systems.
“The establishment and growth of World Passkey Day reflects the fact that organizations of all shapes and sizes are taking action upon the imperative to move away from relying on passwords and other legacy authentication methods,” said Andrew Shikiar, Executive Director and CEO of the FIDO Alliance.
Microsoft is on a mission to delete passwords for a billion users, given that “the password era is ending.” The Windows-maker warns users that “bad actors know it, which is why they’re desperately accelerating password-related attacks while they still can.” And those attacks are now making headlines weekly.
The answer is passkeys, which link your account security to your physical device security: unless an attacker has access to both your hardware and your unlock method (biometric or PIN), they can’t bypass a password to log in.
More than others, Microsoft is not just promoting passkeys but also password deletion: “If a user has both a passkey and a password, and both grant access to an account, the account is still at risk for phishing. Our ultimate goal is to remove passwords completely and have accounts that only support phishing-resistant credentials.”
The FIDO Alliance, the organization charged with promoting passkeys has taken to the internet airwaves this time around to “launch a Passkey Pledge to further accelerate [the] global movement away from passwords.”
Its latest research found that “over 35% of people had at least one of their accounts compromised due to password vulnerabilities, [and] 47% of consumers will abandon purchases if they have forgotten their password for that particular account. This is significant for passkey adoption, as 54% of people familiar with passkeys consider them to be more convenient than passwords, and 53% believe they offer greater security.”
FIDO has welcomed Microsoft’s password deletion as industry leading. “This is an exciting and seminal milestone as Microsoft is taking passwords out of play for over a billion user accounts,” its CEO Andrew Shikiar told me, “who can now instead leverage user-friendly, phishing-resistant passkeys. Microsoft’s leadership in doing so today will help encourage more service providers to do the same, which moves us collectively closer to the day when passwords are fully in our rear-view mirror.”
The FIDO Alliance has also recently invited companies to participate in the World Passkey Pledge to create a more secure future, and move past the vulnerability and hassle of passwords.
Simon McNally, Cybersecurity Expert at Thales said, “Passwords have long been a weak link in digital security, forcing consumers and businesses into a frustrating cycle of password resets and potential breaches. We welcome the FIDO Alliance’s commitment to World Passkey Day and its push for a passwordless future. Passkeys provide a seamless and secure authentication experience, eliminating the risks and frustrations associated with traditional passwords.
Passkeys are automatically generated and securely stored, removing the burden of creating and managing complex passwords. They also enhance privacy by allowing authentication without sharing sensitive data, reducing the risk of breaches. As trust in digital security becomes more critical, businesses must prioritise passwordless solutions to protect users and build brand confidence.”
The Passkey Pledge for a Passwordless Future
To commemorate World Password Day (or, as it will henceforth be known, World Passkey Day), the FIDO Alliance has released a survey on the usage of passkeys which found that 74% of consumers are aware of passkeys, meaning that consumers are aware of the potential value a passkey login experience can bring. To support this, the survey also found that 69% of consumers have enabled passkeys on at least one of their accounts.
Furthermore, for those who have used passkeys, 38% report enabling them whenever possible suggesting that some consumers already see the added user experience and security benefits passkeys bring. In fact, more than half of consumers believe passkeys are both more secure (53%) and more convenient (54%) than passwords. Many businesses and organizations have already signed the Passkey Pledge, including Amazon, Apple, Google, Microsoft, Samsung, and many more!
A pivotal moment
Andrew Shikiar, executive director and CEO of the FIDO Alliance, commented on both the recent survey and the Passkey Pledge:
“This year’s World Passkey Day comes at a pivotal moment for user authentication around the world – with a rapidly growing number of service providers (including nearly half of the world’s top 100 websites) offering billions of user accounts the option to sign in with passkeys instead of passwords. Well over 100 organizations have taken the Passkey Pledge, indicating their commitment towards a future free from the risk and burdens of passwords.
Consumers are not only increasingly aware of passkeys, they’re using them more frequently: 69% of respondents to our recent survey are enabling them on at least one account, and 38% are now enabling them whenever possible.
Passkeys are so intuitive to use that once users integrate passkeys, they rarely go back. This is good for consumers who are frustrated by password reliant sign-in processes — 35% of whom said they experienced account compromises as a result of password vulnerabilities last year — and e-commerce retailers alike.
This shift isn’t just about innovation or bottom lines; it’s about rebuilding digital trust and creating a safer, more efficient internet for everyone.”
NEWARK, NJ, May 2, 2025 – Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge, Chair of Broadening the Reach Working Group for the Ecosystem for Research Networking (ERN), and ERN Steering Committee member, welcomed an esteemed group of scientific and cyberinfrastructure researchers as co-chair of this year’s ERN Summit.
The Ecosystem for Research Networking Summit gives the scientific and cyberinfrastructure research community an opportunity to come together to discuss the ERN’s mission and accomplishments, hear from domain researchers and CI professionals at smaller institutions about the successes and challenges of leveraging local, regional, and national resources for projects, and learn about funding resources, partnership opportunities, and regional and national communities.
The Summit was a virtual event held on April 23, 2025, and reflected a shared commitment to building bridges across scientific and cyberinfrastructure communities and exploring the timely topics of advanced technologies, artificial intelligence (AI), quantum, and workforce development. Attendees heard from thought leaders who are driving AI and quantum initiatives, shaping curriculum innovation, and expanding opportunities across institutions of all sizes. Panel discussions examined the unique constraints and opportunities facing non-R1 institutions, along with the curricular innovations and partnerships shaping the future quantum workforce.
“The ERN Summit sparked dynamic conversations and showcased the perspectives of vital contributors to our national research ecosystem, including leading institutions, regional hubs, industry partners, and smaller colleges. We hope the Summit inspired new collaborations, actionable ideas, and lasting connections throughout the research and education community.”
– Dr. Forough Ghahramani
Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge
Chair of Broadening the Reach Working Group, ERN
Steering Committee Member, ERN
Keynote speaker Dan Stanzione, Ph.D., Associate Vice President for Research and Executive Director of the Texas Advanced Computing Center (TACC), led the topic “AI and HPC, or AI Ends HPC?”, where he discussed the evolving balance between AI and high-performance computing in future system designs. Dr. Stanzione argued that rather than replacing HPC, AI is transforming it, requiring the community to rethink architecture, software, workforce strategy, and environmental impact.
The Summit’s three panels provided insights from leading institutions, regional hubs, industry partners, and smaller institutions, including:
- How regional economic and workforce initiatives are taking shape in AI and quantum;
- How industry-academic partnerships are preparing the next-generation quantum workforce; and
- What unique strengths and challenges are faced by non-R1 and smaller institutions in this landscape.

“It was wonderful seeing the strong attendance across all panels. The discussions addressed pressing issues relevant to research and education communities in universities and colleges nationwide and internationally as we collectively tackle the challenges of AI and emerging quantum technologies. We will continue these discussions over the next year, leading to solutions that will be reported on during our next summit,” said Barr von Oehsen, Ph.D., Director of the Pittsburgh Supercomputing Center and Chair of the ERN Steering Committee.
The Summit closed with rich dialogue on how the ERN can continue to build a thriving, inclusive community that supports institutions of all sizes and types. The energy and ideas shared set a clear path for future action! The ERN community is energized to keep the momentum going! Additional information is available at https://ern.ci.
About Edge
Edge serves as a member-owned, nonprofit provider of high performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable and thought-leading purpose-built, advanced connectivity, technologies and services.
The post Ecosystem for Research Networking (ERN) Summit 2025 appeared first on NJEdge Inc.
Date: January 15, 2026
Time: 9 am – 5 pm
Attendee Ticket: $49
Event Location:
Princeton University
For those requiring overnight accommodations while attending EdgeCon Winter 2026, a Group Rate has been arranged for attendees at: Nassau Inn »
10 Palmer Square, Princeton, NJ 08542
Or, for Attendees who want to make reservations over the phone, please call the Nassau Inn Reservation Desk directly at 609-921-7500 and reference the booking #4766909 in order to receive the group rate. This rate is only valid until 12/14/25.
More information coming soon!
The post EdgeCon Winter 2026 appeared first on NJEdge Inc.
Hyperledger Fabric has long been a cornerstone of enterprise blockchain, effectively addressing the deficiencies and governance challenges of public blockchain systems. Designed for modularity, scalability, and effective governance, it laid the groundwork for enterprise blockchain applications where trust, compliance, and performance are paramount.
May 1, 2025
In recognition of World Passkey Day (formerly World Password Day), the FIDO Alliance is putting the spotlight on real-world passkey deployments from leading organizations around the globe. Read on for highlights of the successes companies in various industries are seeing from delivering faster, easier, and more secure sign-ins with passkeys—showcasing the global commitment to move away from passwords.
The FIDO Alliance also released today an independent study of consumers to understand how passkey usage and consumer attitudes towards authentication have evolved. The research found that in the last year, over 35% of people had at least one of their accounts compromised due to password vulnerabilities. In addition, 47% of consumers will abandon purchases if they have forgotten their password for that particular account. This is significant for passkey adoption, as 54% of people familiar with passkeys consider them to be more convenient than passwords, and 53% believe they offer greater security. The full report is available here.
ABANCA’s mobile app serves over 1,200,000 customers a month, making it the bank’s largest branch. Today, more than 42% of its mobile customers are using passkeys via the bank’s ABANCA Key product. As a result, more than 11,000,000 high-risk transactions have been protected without technical or service incidents, and due to the prioritization of UX, they have managed a Customer Effort Score (CES) of 4.7.
Aflac was the first major insurance company to adopt passkeys in the U.S. Aflac partnered with Transmit Security to launch its passkey authentication initiative. Today, only the first phase of the project is complete, and yet more than 500,000 Aflac customers have enrolled a passkey, resulting in a 32% reduction in password recovery requests. This has yielded 30,000 fewer calls per month to the call center for identity issues. Aflac reports that the highest enrollment rates occur at the point of registration, reinforcing the FIDO Alliance’s Design Guidelines recommendation to prompt customers during account-related tasks. The steady, organic adoption of passkeys by Aflac customers continues to grow daily and directly contributes to measurable improvements in cost reduction and customer experience.
KDDI now has more than 13.6 million au ID customers that use FIDO and has seen a dramatic decrease (down nearly 35%) in calls to their customer support center as a result. KDDI manages FIDO adoption carefully for both subscribers and non-subscribers.
LY Corporation property Yahoo! JAPAN ID now has 28 million active passkey users. Approximately 50% of user authentication on smartphones now uses passkeys. LY Corporation said that passkeys have a higher success rate and are 2.6 times faster than SMS OTP.
Mercari has seen 9 million users enroll with passkeys, and is enforcing passkey login for users who have enrolled with synced passkeys. Notably, there have been zero phishing incidents at Mercari Shop and Mercoin (a Mercari subsidiary) since March 2023.
Microsoft began rolling out passkeys for Microsoft consumer accounts in 2024. They now see nearly one million passkeys registered every day. Microsoft has found that users signing in with passkeys are three times more successful at getting into their account than password users (about 98% versus 32%), passkey sign-ins are eight times faster than traditional password + MFA flows, and passwordless-preferred UX has reduced password use by over 20%.
Nikkei rolled out passkeys in February and is already seeing thousands of customers using passkeys. Additionally, they are seeing almost no inquiries about how to use passkeys at the support desk.
NTT DOCOMO has increased its passkey enrollments, and passkeys are now used for more than 50% of authentications by d ACCOUNT users. NTT DOCOMO reports significant decreases in successful phishing attempts, and there have been no unrecognized payments at the docomo Online Shop since September 2022. NTT DOCOMO has continuously improved the UX, including increasing the number of other passkey-enabled services.
Samsung Electronics’ Galaxy smartphones support fast and convenient logins through biometric authentication and FIDO protocols. Ease of use, speed, compatibility across services, and status as an industry standard made passkeys a compelling choice for Samsung Electronics.
VicRoads is the vehicle registration and driver licensing authority in Victoria, Australia. It registers over six million vehicles annually and licenses more than five million drivers. Within the first weeks of deployment with its passkey vendor Corbado, passkey adoption significantly exceeded VicRoads’ expectations. Users embraced the phishing-resistant authentication method, benefiting from a frictionless login experience optimized for speed and security. The exceptionally high passkey activation rate – peaking at 80% on mobile devices and over 50% across all platforms – led to 30% passkey login rate within the first seven weeks. Uptake continues to rise steadily, translating into measurable operational benefits, including reduced authentication-related support tickets, lower SMS OTP costs and improved user experience and security.
Zoho Corporation has rolled out passkeys to its 100+ million zoho.com customers and has seen a resulting 30% increase month over month in passkey adoption and a 10% drop in password reset queries. As a next step, the company will begin its rollout to Zoho Vault customers in May.
Read the full case studies from ABANCA, Microsoft, Nikkei, Samsung Electronics, VicRoads and Zoho Corporation to learn more about how these companies are discovering the benefits of passkey adoption. To learn more about passkey implementation through other documented case studies, visit the FIDO Alliance’s resource library. Have a case study to share? Contact us!
New global survey: More than two thirds of users familiar with passkeys turn to them for simpler, safer sign-ins as password pain persists
MOUNTAIN VIEW, Calif., May 1, 2025 – With digital security more critical than ever, the FIDO Alliance is commemorating World Passkey Day 2025 with the release of an independent study of consumers across the U.S., U.K., China, South Korea, and Japan to understand how passkey usage and consumer attitudes towards authentication have evolved.
The research found that in the last year, over 35% of people had at least one of their accounts compromised due to password vulnerabilities. In addition, 47% of consumers will abandon purchases if they have forgotten their password for that particular account. This is significant for passkey adoption, as 54% of people familiar with passkeys consider them to be more convenient than passwords, and 53% believe they offer greater security.
World Passkey Day serves as the FIDO Alliance’s annual call to action for individuals and organizations to adopt passkey sign-ins, making the web safer and more accessible.
Highlights from the research show consumer passkey awareness is on the rise and outline several key trends in adoption among those who are aware of passkeys, including:
- 74% of consumers are aware of passkeys.
- 69% of consumers have enabled passkeys on at least one of their accounts.
- Among those who have used passkeys, 38% report enabling them whenever possible.
- More than half of consumers believe passkeys are both more secure (53%) and more convenient (54%) than passwords.

The survey report is available at https://fidoalliance.org/wpd-report-2025-consumer-password-passkey-trends/, which includes additional insights on how passkey adoption is trending with consumers and organizations to improve global digital access, authentication, and security.
“The establishment and growth of World Passkey Day reflects the fact that organizations of all shapes and sizes are taking action upon the imperative to move away from relying on passwords and other legacy authentication methods that have led to decades of data breaches, account takeovers and user frustration, which imperil the very foundations of our connected society,” said Andrew Shikiar, Executive Director and CEO of the FIDO Alliance. “We’re thrilled by the fact that over 100 organizations around the world signed our Passkey Pledge, and we are pleased to support the market in their march towards passkeys through a variety of freely available assets, including our market-leading Passkey Central resource center.”
To further encourage organizations to embrace the shift away from passwords toward passkeys, the FIDO Alliance also launched the Passkey Pledge, a voluntary pledge for online service providers and authentication product and service vendors committed to embracing passkeys. The Passkey Pledge has received commitments from over 100 organizations in just over 20 days. A full list of companies that have taken the pledge can be found here.
The availability of passkeys has steadily increased with implementation reaching 48% of the world’s top 100 websites as enterprises and service providers collectively seek to embrace a new era of faster sign-ins, higher success rates, fewer account takeovers, lower support costs, and reduced cart abandonment.
To learn how to start your organization’s passwordless journey, or to begin using passkeys today, visit: https://www.passkeycentral.org/home.
Notes to editors: This SurveyMonkey online poll was conducted from April 13-14, 2025, among a global sample of 1,389 adults ages 18 and up. Respondents for this survey were selected from the nearly 3 million people who take surveys on the SurveyMonkey platform each day. Data for this survey has been weighted for age, race, sex, education, and geography to adequately reflect the demographic composition of the United States, United Kingdom, China, South Korea and Japan. The modeled error estimate for this survey is plus or minus 3.5 percentage points. To calculate the proportion of the world’s top websites and services that support passkeys, the FIDO Alliance combined publicly available information with its own data on passkey deployments.

About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication technologies and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services. For more information, visit www.fidoalliance.org.
It feels fitting and, yes, admittedly somewhat contrived, that our birthday falls on May 1st. It’s a moment when people around the world celebrate the power of collective action and the achievements of workers everywhere.
This year, our anniversary feels even more special. We’ve had our share of triumphs and certainly challenges, but nine years in (which is a long time in internet years!) we’re still here.
2025 has been named the United Nations International Year of Co-operatives, with the theme “Co-operatives Build a Better World.” The global co-operative movement deserves this spotlight, showing how co-ops like ours are making a difference by putting people and planet before profit.
https://ica.coop/en/cooperatives/facts-and-figures

At We Are Open, we believe in doing business differently. We’re a worker-owned co-op, which means we make decisions together, share responsibility, and support each other to do meaningful work. Over the past nine years, we’ve learned (and re-learned!) that openness, collaboration, and trust are at the heart of what makes co-ops so important.
So today, we’ll down tools and raise a glass to our members, clients, friends, comrades in CoTech and workers.coop and the wider co-op community. Thanks for being part of our journey so far.
Solidarity and celebration from all of us at We Are Open! If you’d like to wish us happy birthday, or have a problem we may be able to help with, email us without delay at hello@weareopen.coop
We Are Nine was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
May 2025
DIF Website | DIF Mailing Lists | Meeting Recording Archive
Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Community Events; 7. DIF Member Spotlights; 8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Labs Beta Cohort 2 RFP is here!
Exciting news for identity innovators! DIF Labs has just announced their request for proposals for Beta Cohort 2, focused on driving high-leverage work at the intersection of identity, trust, and emerging technologies. The program seeks proposals from builders and researchers in five key focus areas: Personhood Credentials, Content Authenticity and Assertions, Applied Cryptography, Verifiable AI, and Industry-Aligned Applications. Project leads must be DIF members, with proposals due by May 20, 2025. Selected projects will receive mentorship, ecosystem support, and collaboration opportunities over a 2-3 month development period. For complete details on submission requirements, evaluation criteria, and the application process, visit the full announcement on the DIF Labs website.
DIF Hospitality & Travel Working Group is Live
The Decentralized Identity Foundation (DIF) has launched a new Hospitality & Travel Working Group focused on developing standards for self-sovereign data exchange in the travel industry. This initiative will create frameworks allowing travelers to control their personal information while enabling seamless interactions with service providers. Led by chair Douglas Rice, the group will address traveler profiles, data portability, and interoperability across the travel ecosystem. For more details, see DIF's announcement:
DIF Launches Decentralized Identity Foundation Hospitality & Travel Working Group: The Decentralized Identity Foundation (DIF) has officially launched its Hospitality & Travel Working Group, evolving from the ongoing H&T Special Interest Group (SIG). This new working group will focus on developing standards, schemas, processes, and documentation to support the self-sovereign exchange of data between travelers, services, and intermediaries in the hospitality… (Decentralized Identity Foundation Blog)

Algorand Releases the Universal Chess Passport
Algorand has partnered with World Chess to develop a "Universal Chess Passport" using blockchain-based digital identity and verifiable credentials technology. This innovative system will allow chess players to maintain portable digital identities and credentials across platforms, enabling them to seamlessly transfer their achievements, rankings, and records between online platforms and in-person tournaments. The proposal aims to address fragmentation in the chess ecosystem while maintaining privacy and security through decentralized identifiers (DIDs). A live discussion about the initiative will be held on May 6th featuring representatives from Algorand, World Chess, and the Decentralized Identity Foundation. For more details, visit Algorand's announcement page:
Algorand x World Chess - Universal Chess Passport: Tired of rebuilding your chess identity across every platform? A new blockchain-based system from Algorand and World Chess proposes portable, verifiable credentials for chess players—bringing fairness, reputation, and rewards into one unified ecosystem. (Algorand Foundation)

🛠️ Working Group Updates
Browse our working groups here
Hospitality & Travel Working Group
The newly-launched Hospitality & Travel Working Group started strong. They discussed their draft implementation guide and schema development, exploring the complexities of travel stages and verifiable data. They refined key terms in their glossary, including standardizing on the term "HATPro" (hospitality and travel profile). The team aims to have a substantial portion of the schema ready to announce at the HITEC Conference in mid-June.
Creator Assertions Working Group
The Creator Assertions Working Group held productive meetings in both Americas/EU and APAC time zones this month. Discussions focused on identity claims, trust profiles, and collaboration with external groups including the JPEG Trust Group and Project Origin. The group is preparing three assertion specs for approval and considering integrating first-person credentials. The group will soon hold an election for co-chairs, with Scott Perry and Alex Tweeddale as candidates.
DID Methods Working Group
The DID Methods Working Group finalized the DIF recommended methods process, agreeing to use "DIF Recommended" consistently throughout their documentation. They heard a detailed presentation on DID Peer version 4, exploring its features and advantages over other methods. The group also decided to dedicate full sessions to DID method deep dives as practice runs for their evaluation process.
Identifiers and Discovery Working Group
The DID Traits team is preparing to release version 1.0 of their specification, focusing on traits that make assessment of DID methods easier. They're adding new traits, including key validation and long-term availability, while improving terminology for clarity. The team removed software trade references due to complexity and expressed satisfaction with their progress toward the 1.0 milestone.
The did:webvh team is finalizing their 1.0 specification, including updates to examples and clarification of terminology. They addressed concerns about performance with large logs, cryptographic agility, and improving DID availability through watchers. The team is developing a JSON schema for data model validation and planning a test suite, while seeking acknowledgement from the DIF Technical Steering Committee for the 1.0 release.
🪪 Claims & Credentials Working Group
The Credential Schemas team developed a new schema directory structure with community schemas, draft schemas, and recommended schemas folders to better organize their work. They refined the proof of age schema, including improvements to boolean threshold indicators and simplifying schema names for clarity and reuse. The team welcomed new members and discussed potential synergy with the Oasis working group.
Applied Crypto Working Group
The BBS+ team focused on pseudonym implementations and designing approaches for everlasting unlinkability in credential systems. They considered trade-offs between post-quantum security and privacy, concluding that their baseline approach with pseudonyms offers preferable privacy protections. Team members are exploring the potential of using Rust instead of C++ for implementation and plan to interact more with the CFRG to advance the project.
DIF Labs Working Group
DIF Labs has launched its RFP for Beta Cohort 2. Read more here
DIDComm Working Group
The DIDComm team discussed implementing binary encoding in the next version, proposing either a DIDComm version 3 with built-in binary encoding or adding optional CBOR envelopes in version 2.2. They welcomed new member SwissSafe, who expressed interest in contributing to standardizing CBOR for DIDComm messaging. The team agreed to implement a flag designator in the header for different encodings.
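To make the CBOR discussion concrete, here is a minimal TypeScript sketch, not anything the working group has specified, comparing a JSON-serialized DIDComm-style plaintext message with its CBOR encoding. The message fields follow the DIDComm v2 plaintext shape; the third-party cbor-x package is an assumption for illustration, and any CBOR codec with encode/decode would do.

```ts
// Illustrative only: compare JSON vs. CBOR sizes for a DIDComm-style message.
import { encode, decode } from "cbor-x";

// A minimal DIDComm v2-style plaintext message; the body content is invented.
const message = {
  id: "1234567890",
  type: "https://didcomm.org/basicmessage/2.0/message",
  from: "did:example:alice",
  to: ["did:example:bob"],
  created_time: 1716200000,
  body: { content: "Hello over DIDComm" },
};

const asJson = new TextEncoder().encode(JSON.stringify(message));
const asCbor = encode(message);

console.log(`JSON: ${asJson.byteLength} bytes, CBOR: ${asCbor.byteLength} bytes`);

// Round-trip to confirm the binary form preserves the message.
const roundTripped = decode(asCbor);
console.log(roundTripped.type === message.type); // true
```

Whether such an envelope ships as an optional version 2.2 feature or as the default in a version 3 is exactly the open question the team is weighing; the agreed header flag would tell recipients which encoding a given message uses.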
If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.
🌎 DIF Special Interest Group Updates
Browse our special interest groups here
The H&T SIG held multiple productive sessions in April. One meeting featured representatives from the European Digital Identity Wallet Consortium who presented on the EUDI wallet's development and potential applications in travel and hospitality. Another session discussed food model development, trip profile components, and context-specific preferences for travelers. The team continues refining their implementation guide and developing a comprehensive strategy for travel wallets that support personalization, with plans to showcase their schema at the HITEC Conference in mid-June.
DIF China SIG

APAC/ASEAN Discussion Group
The APAC/ASEAN call featured a presentation on the Verifiable Legal Entity Identifier (vLEI) and its relation to the Global Legal Entity Identifier Foundation. Steering Committee member Catherine Nabbala of Finema, a qualified vLEI issuer, detailed the vLEI ecosystem governance framework and verification processes. The group discussed implementation challenges, including key management, wallet technologies, and potential applications in various industries. Participants showed particular interest in how vLEI could be integrated with traditional SSI architecture using did:webs.
DIF Africa SIGThe DIF Africa SIG hosted a presentation by Karla McKenna on the Global Legal Entity Identifier Foundation (GLEIF) and Verifiable Legal Entity Identifier (vLEI). Karla discussed organizational identity in the digital world and potential applications of vLEI in various sectors. The meeting explored the process of becoming a qualified vLEI issuer and how such technology could benefit government and public services. Participants expressed interest in applying these concepts to streamline business processes, particularly in Africa.
DIF Japan SIGThe Japan SIG discussed NodeX's migration from Sidetree to the did:webvh method due to concerns about content-addressable storage stability, Bitcoin network performance, and Sidetree maintenance status. Technical discussions included DID structure, the resolve process, did:webvh method advantages, and IoT device authentication and trust. The team also explored global deployment strategies and trust framework implementation, particularly for IoT devices in critical infrastructure and energy sectors.
DIF Korea SIG

📖 DIF User Group Updates

DIDComm User Group
The DIDComm User Group explored Colton's WebRTC project that enabled browser-to-browser voice communication over DIDComm. Members discussed binary encoding implementation options for DIDComm messages, focusing on CBOR while keeping the door open to other encodings. The group also demonstrated a chat system using DIDComm's mediator capabilities for seamless communication between users, even when offline. Future discussions will focus on use case documentation templates and integrating with AI-related protocols like the Model Context Protocol.
Veramo User Group

📢 Announcements at DIF
Conference season is kicking into high gear. Explore our Events calendar to meet the DIF community at leading Decentralized Identity, Identity, and Decentralized Web events.
🗓️ DIF Members 👉 Are you a DIF member with news to share? Email us at communication@identity.foundation with details.
🆔 Join DIF!
If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:
Follow us on Twitter/X
Join us on GitHub
Subscribe on YouTube
🔍 Read the DIF blog
New Member Orientations
If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF's Slack or contact us at community@identity.foundation if you need more information.
Passkeys are no longer just a concept: the future of sign-in is here and consumers are ready. Built on the open authentication standards developed by the FIDO Alliance, passkeys are quickly gaining momentum among global service providers. Why? Because they offer a frictionless, phishing-resistant, passwordless sign-in experience that is redefining digital security and user convenience.
Ahead of World Passkey Day 2025, the FIDO Alliance commissioned an independent survey of 1,389 people across the U.S., U.K., China, South Korea, and Japan to provide additional insights into how authentication preferences are evolving in real time.
The research shows people continue to struggle with traditional passwords:
- 36% of respondents said they’ve had at least one account compromised due to weak or stolen passwords.
- 48% admitted they’ve abandoned an online purchase simply because they forgot their password.

Read the full results of the survey in this eBook.
Download the Report | Read the Press Release

April 29, 2025 – The FIDO Alliance has launched a Payments Working Group (PWG) to define and drive FIDO solutions for payment use cases. The PWG will also act as subject matter experts and internal advisors within the FIDO Alliance on issues affecting the use of FIDO solutions for payment use cases. The PWG is co-chaired by Henna Kapur of Visa and Jonathan Grossar of Mastercard, with other FIDO Alliance member company participants including American Express; Cartes Bancaires; Futurae; Infineon; OneSpan; PayPal; Royal Bank of Canada – Solution Acceleration & Innovation; and Thales.
The PWG will focus on three areas:
- Identify and evaluate specific requirements for payment authentication, including requirements in the areas of UX, security, and regulation unique to payments;
- Identify and evaluate existing and emerging solutions to address payment authentication requirements; and
- Develop guidelines for use of passkeys and/or proposed FIDO solutions alongside existing payment technologies such as EMV® 3-D Secure or EMVCo SRC.

The PWG will also work on associated projects relating to the use of FIDO solutions for payments, including: collecting and publishing deployment case studies; documenting challenges and potential solutions to issues; and working with FIDO Alliance liaison partners to drive education and adoption.
Join the Payments Working Group
Organizations interested in taking part in the PWG and driving the adoption of passkeys for payments can inquire today. Participation in the PWG is open to all Board, Sponsor, and Government level members of the FIDO Alliance. Non-member organizations interested in participating should contact the FIDO Alliance to become a member; learn more by visiting https://fidoalliance.org/members/become-a-member/.
NEWARK, NJ, May 1, 2025 – Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge, and Vice President for Research and Collaborations, New Jersey Big Data Alliance (NJBDA), will join the 12th annual NJBDA Symposium at William Paterson University of New Jersey on May 16, 2025. This annual event is New Jersey’s premier conference for big data and advanced computing and attracts over two hundred attendees from industry, government, and academia.
This year’s theme, Empowered by AI: Innovation and the Rise of the Augmented Workforce, will allow attendees to explore the transformative power of AI and how this technology can empower workers to reach new levels of innovation and creativity. The full-day event will feature an industry panel discussing the impact of AI across multiple sectors, a workforce development panel on advanced technologies, a hands-on student workshop, and networking opportunities.
Breakout sessions include a Research Track Program led by Research Track Chair Dr. Ghahramani, which will feature presentations on current applied research in Big Data, AI, and Machine Learning. The research session, AI Methods & LLM Innovation, will explore cutting-edge AI techniques, including generative models, graph networks, and real-time Natural Language Processing (NLP), and will highlight the technical advancements driving the next wave of intelligent systems.
The session, AI in Health, Education, and Society, will explore how AI is shaping healthcare, education, and social systems, with a focus on inclusive, human-centered technologies that promote well-being and equity. In a separate session, AI in Industry, Infrastructure, and the Environment, attendees will engage with real-world examples of how AI is driving innovation in business operations, public infrastructure, and environmental sustainability.
“As we stand at the intersection of AI innovation and workforce transformation, this year’s NJBDA Symposium provides a vital forum to advance collaborative research, inform policy, and shape inclusive pathways for talent development. The research track will highlight groundbreaking work in generative AI, machine learning, and real-time analytics, showcasing the talent and innovation emerging across New Jersey’s academic institutions and their relevance to pressing societal and industry challenges.”
— Dr. Forough Ghahramani

Three student externship teams will also share insights from the Data Science Curriculum Alignment and Articulation Agreement Pathways Project, a joint effort led by the Rutgers Master of Business and Science (MBS) Externship Program, the New Jersey Big Data Alliance, and NJ Pathways. The group investigated how data science programs at New Jersey community colleges align with those at four-year universities and explored ways to make the credit transfer process more seamless for students moving from associate to bachelor’s degree programs. Following the presentation, attendees can join a Q&A panel to further explore the students’ findings and experiences.
For event information and registration, please visit the NJBDA Symposium website and explore the opportunity to collaborate, expand your knowledge, and gain insights into the evolving role of intelligent technologies in shaping our future.
About Edge
Edge serves as a member-owned, nonprofit provider of high performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable and thought-leading purpose-built, advanced connectivity, technologies and services.
The post Dr. Forough Ghahramani to lead the Research Track at the 12th Annual NJBDA Symposium appeared first on NJEdge Inc.
As we kick off the second quarter of 2025, it's a good time to reflect on the incredible progress we've made together at LF Decentralized Trust (LFDT). LFDT was launched last fall to support a fast-evolving decentralized future through open source collaboration, with a focus on digital trust, interoperability, identity, and decentralized infrastructure. We hit the ground running, making 2024 a landmark year. And now 2025 is off to a fast start across our industry and certainly within the LF Decentralized Trust global community!
Kia ora,
Welcome to the April edition of the DINZ newsletter. As we navigate this period of transition, our commitment to advancing digital trust in Aotearoa remains unwavering. This month brings exciting updates on upcoming events, new members, and important industry developments.
Last Chance: DISTF Survey Closing Soon!
Don’t miss your opportunity to help shape the future of Digital Identity in New Zealand. Our Digital Identity Services Trust Framework (DISTF) survey closes next Monday, 5 May. Your insights will guide the DINZ DISTF Working Group’s priorities and advocacy efforts. This survey is open to both DINZ members and non-members. Your input is important as we work to maximise the benefits of this framework for all New Zealanders.
Take five minutes to complete the survey.
Digital Trust Hui Taumata Update
Planning for our landmark event is progressing well, with speakers now confirmed including Ministers Judith Collins and Scott Simpson, along with Deputy Privacy Commissioner Liz MacPherson. Additionally, we are well advanced with two terrific international keynote speakers thanks to DINZ member sponsors. We’re pleased to report that sponsorship commitments are tracking positively, reflecting the growing importance of digital trust in our national conversation.
Mark your calendars (12 August 2025, Te Papa, Wellington) for this premier industry gathering that will bring together leaders and innovators in the digital trust space. Register your interest here.
Member News
Kudos to MyMahi’s Stefan Charsley for his prestigious Kim Cameron award, which enabled him to attend IIW 40 earlier this month, wonderfully reported on by co-founder Phil Windley here. A regular attendee of DINZ’s Coffee Chats, Stefan will, we hope, share the knowledge and perspectives he gained. While on the subject of awards, it was great to see MinterEllisonRuddWatts honoured in Best Lawyers.
RNZ reports that the Department of Internal Affairs has issued a tender for a “new genuine face capture solution”; notably, its Trust Framework Authority also issued an RFI for digital ID services accreditation infrastructure, writes Biometric Update.
Our DINZ community continues to grow! We’re delighted to welcome Cybercure, LuminPDF and just in, OriginID and BeyondTrust.
See all organisation members here.
Policy and Submissions
DINZ’s reputation for thoughtful, high-quality submissions continues to grow, and the two above are no exception. View our submissions here.
International Engagement
We were fortunate to welcome back Zygma’s Richard Wilsher during his brief visit to New Zealand this month – two years since he delivered this highly thoughtful webinar. Richard met with both the DISTF Working Group and the Trust Framework Authority, sharing his valuable international perspectives and expertise.
Ministerial Engagement
As part of a FinTechNZ delegation that included DINZ member Middleware, our outgoing Executive Director Colin Wallis met with Minister of Commerce and Consumer Affairs Scott Simpson. Smooth, accessible digital identification is a crucial enabler for fintech innovation and the upcoming Customer and Product Data Act.
International News
International ID Day (9 April) saw attendance from several New Zealand organisations. If you missed it, catch up here.

UK Digital Identity Developments: The saga currently playing out in the UK may prove the adage once again that ‘trust takes years to build, seconds to break, and forever to repair’. First, industry stakeholders expressed concerns over the government’s digital wallet approach, with the Digital Identity and Attributes Trust Framework (DIATF) community advocating for alternative approaches. Then came the government facing claims of serious cyber security and data protection issues in its One Login digital ID system, a story extended by The Telegraph with commentary from DINZ Coffee Chat attendee Mark King.

Executive Director Search Update
The DINZ Executive Council is currently interviewing the candidate shortlist with an announcement expected shortly. We appreciate your patience during this transition period.
We value your continued support and engagement as we work together to advance digital trust in Aotearoa New Zealand.
Ngā mihi nui,
The team at Digital Identity NZ
Upcoming events, new members, and industry developments.
Read full news here: Moving Forward Together
The post Moving Forward Together appeared first on Digital Identity New Zealand.
“Cypherpunks wouldn’t just critique the surveillance state—they’d also call out us technologists for enabling it. We were supposed to resist, not retrofit.”
Christopher Allen recently talked with Tereza Bízková in an interview that was published to the front page of Hackernoon. It was headlined “The Co-Writer of TLS Says We’ve Lost the Privacy Plot”. In it, Christopher talks about what privacy means to him, what he thinks about recent privacy efforts, how centralization has become a problem, and how all of this connects to work done by the cypherpunks in the ’90s.
Perhaps most importantly, Christopher answers the question: what would be non-negotiable for a new privacy-first system today? His answer unsurprisingly reflects our vision here at Blockchain Commons, built on data minimization, progressive trust, and limited scale.
Privacy is one of the fundamental Gordian Principles, but probably the one we talk about the least, as so much of our focus is on resilience (such as #SmartCustody) or on independence and openness (as reflected in our attention to interoperability). Read “The Co-Writer of TLS Says We’ve Lost the Privacy Plot” for much more on why privacy is important and what exactly it is!
Describe your service/platform/product and how it’s using FIDO authentication.
Microsoft Account (MSA) powers consumer-facing experiences across services like Xbox, Microsoft 365, Copilot, and more. In 2023, Microsoft began rolling out passkey support across these services, allowing users to sign in with a face, fingerprint, or device PIN instead of a password. By integrating FIDO credentials, we made it easier, faster, and significantly more secure for over a billion users accessing their Microsoft accounts, by removing the need for passwords.
What were the challenges you were trying to overcome?
We set out to solve three major challenges:
Security: Passwords are inherently insecure and highly vulnerable to phishing and brute force attacks. In 2024, we observed more than 7,000 password attacks per second.
User experience: Passwords are frustrating—users forget them, reuse them, or mistype them. We wanted a sign-in experience that users could get right the first time, every time.
Adoption at scale: We needed a solution that could work across devices and platforms while meeting high usability expectations for a global user base.
Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?
FIDO credentials offer the ideal combination of security, usability, and interoperability. They are resistant to phishing and credential theft, and they eliminate the need for shared secrets like passwords. FIDO credentials also enable seamless cross-device and cross-platform experiences—critical for consumer use cases. In testing, we found that passkeys delivered both improved security and a dramatically better user experience.
Describe your roll out of FIDO authentication.
Microsoft took a phased approach. We started by enabling passkeys for MSA sign-ins across consumer services like Xbox and Copilot. From there, we made UX changes to prioritize passwordless options. New Microsoft Accounts are now passwordless by default, and existing users are guided to enroll a passkey during or after sign-in. Throughout this process, we have worked closely with platform partners like Apple and Google, and continued our long-standing collaboration with the FIDO Alliance to ensure our approach aligns with industry standards. For a more detailed look at our approach, refer to Convincing a billion users to love passkeys: UX design insights from Microsoft to boost adoption and security.
What data points can you share that show the impact FIDO authentication has had?
The impact has been significant:
- We now see over one million passkeys registered every day.
- Users signing in with passkeys are three times more successful (95% success rate vs. 30% for passwords).
- Passkey sign-ins are eight times faster than traditional password + MFA flows.
- Our passwordless-preferred UX has already reduced password use by over 20%.

These results confirm that FIDO authentication improves security, boosts user satisfaction, and reduces operational burdens like password resets and support calls.
Read more in the Microsoft blog.
Describe your service/platform/product and how it’s using FIDO authentication.
Nikkei Inc. and the Nikkei Group pursue their mission “to be the most trusted, independent provider of quality journalism to a global community, helping our customers make better decisions.” We offer various media services, including the Nikkei, which serves as the cornerstone of our role as a news organization. The integrated ID platform supporting the Nikkei Group’s digital services, including our core service, the Nikkei Online Edition, is “Nikkei ID.”
Nikkei ID, which offers a wide range of services, has long faced the challenge of balancing security and usability. While we have implemented measures such as improving the login experience with OpenID Connect and introducing two-factor authentication and CAPTCHA (*1) to reduce the risk of unauthorized access, addressing security risks associated with password leaks and reuse, as well as countering increasingly sophisticated attacks, has been difficult.
(*1) A security authentication method to verify that a user is human.
In this context, as FIDO authentication has evolved and the threshold for introducing passkeys to services has lowered, Nikkei ID has moved ahead with evaluation and implementation, with high expectations. Currently, we are expanding functionality to support not only web services but also mobile apps, and aiming to promote the adoption of passkeys through increased user awareness via internal and external blog posts, presentations, and guidance at the Nikkei ID Lounge Help Center.
What were the challenges you were trying to overcome?
The primary goal is to balance security and user experience. Many Nikkei ID users are not accustomed to digital services, so simply enhancing security is not enough. For example, while the introduction of CAPTCHA can prevent brute-force password attacks, it can also become a barrier for users who cannot pass the Turing test (*2), leading to increased support inquiries and added burden on customer service.
(*2) A test to determine whether something is ‘human-like’.
However, FIDO authentication (passkeys) achieves high security and user experience through integration with OS and platforms as a standard. This allows us to replace security measures that reduce risks associated with password authentication but negatively impact UX with passkeys.
Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?
The following two options were considered as alternatives to FIDO authentication (passkeys):
- Mandatory implementation of two-factor authentication, such as TOTP or email verification
- Social login using other ID platforms

As a result of comparing these options, we believe FIDO authentication (passkeys) offers the following advantages:
- It allows for a gradual transition by adding authentication on top of existing password authentication
- It enables authentication methods with better UX, such as biometric authentication
- It fundamentally resolves the risks associated with passwords

When it came to actual implementation, the aspect of “additional authentication” was particularly significant. In other words, it allows for implementation in a loosely coupled and highly cohesive manner without disrupting the existing ID model. The WebAuthn specification provides simple interface libraries and APIs for both backend and frontend on each platform, making secure implementation easy. Additionally, since existing authentication methods can be retained, the advantage of not significantly increasing support workload was also substantial.
Describe your roll out of FIDO authentication.
We implemented our own solution using the open-source backend library WebAuthn4J for FIDO authentication. We chose WebAuthn4J not only for its clear data model but also because it passed the FIDO2 Test Tools provided by the FIDO Alliance. For the frontend, we developed our own implementation that directly interacts with the WebAuthn API. Additionally, we created a test library to emulate FIDO authentication, enabling 24-hour automated testing as a comprehensive test of these implementations.
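For readers curious what a frontend that directly interacts with the WebAuthn API looks like, here is a minimal browser-side sketch of passkey registration. It is illustrative only, not Nikkei’s code: the endpoint paths and the shape of the server response are hypothetical, and a real relying party generates the challenge server-side and verifies the attestation with a library such as WebAuthn4J.

```ts
// Hypothetical endpoints and response fields; only the navigator.credentials
// call is the actual WebAuthn API.
function base64urlToBuffer(s: string): ArrayBuffer {
  const pad = "=".repeat((4 - (s.length % 4)) % 4);
  const b64 = (s + pad).replace(/-/g, "+").replace(/_/g, "/");
  return Uint8Array.from(atob(b64), (c) => c.charCodeAt(0)).buffer;
}

function bufferToBase64url(buf: ArrayBuffer): string {
  const bin = String.fromCharCode(...new Uint8Array(buf));
  return btoa(bin).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

async function registerPasskey(): Promise<void> {
  // 1. Fetch creation options (including a random challenge) from the server.
  const options = await (await fetch("/webauthn/registration/options")).json();

  // 2. Ask the platform authenticator to create a new credential.
  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge: base64urlToBuffer(options.challenge),
      rp: { name: "Example RP", id: "example.com" },
      user: {
        id: base64urlToBuffer(options.userId),
        name: options.userName,
        displayName: options.userDisplayName,
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { residentKey: "preferred", userVerification: "preferred" },
    },
  })) as PublicKeyCredential;
  const attestation = credential.response as AuthenticatorAttestationResponse;

  // 3. Send the attestation back; the server must verify it before storing
  //    the credential (e.g. with WebAuthn4J on the backend).
  await fetch("/webauthn/registration/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: credential.id,
      rawId: bufferToBase64url(credential.rawId),
      clientDataJSON: bufferToBase64url(attestation.clientDataJSON),
      attestationObject: bufferToBase64url(attestation.attestationObject),
    }),
  });
}
```

An emulator like the one Nikkei describes stands in for the authenticator in tests, which is how a flow like this can be exercised by unattended, round-the-clock automated testing.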
The rollout of FIDO authentication (passkeys) was carried out in the following steps:
1. Internal beta testing to gather feedback and monitor usage
2. White-box and black-box testing by external security companies
3. Public release to all users

What data points can you share that show the impact FIDO authentication has had?
Since it was just released in February this year, we cannot provide detailed numbers yet, but thousands of users are already using passkeys. Additionally, we have heard that there have been almost no inquiries about how to use passkeys at the support desk, and we recognize that passkeys are being used smoothly.
Resources
The test library that emulates FIDO authentication, mentioned in the implementation section, is publicly available as Nikkei’s open-source software. You can obtain it at: https://github.com/Nikkei/nid-webauthn-emulator
For authorization after completing FIDO authentication (passkeys), we use Authlete, an OpenID Connect platform. In this case study video, we express our enthusiasm for the introduction of FIDO authentication (passkeys). (At the time of this presentation in 2023, passkeys were still under consideration.) https://www.authlete.com/ja/resources/videos/20231212/02/
Technical blog article during the consideration stage of implementation: https://hack.nikkei.com/blog/advent20241221/
Background: VicRoads
VicRoads is the vehicle registration and driver licensing authority in Victoria, Australia. It registers over six million vehicles annually and licenses more than five million drivers.
Operating as a joint venture between the Victorian State Government, Aware Super, Australian Retirement Trust, and Macquarie Asset Management, VicRoads is a critical provider of public services in the state.
Challenge: seamless and cost-effective authentication for government services
VicRoads aims to become Australia’s most trusted digital government service provider by delivering secure, frictionless services to millions of people.
Given the importance of the data that VicRoads holds on behalf of its customers, security has always been a primary consideration.
In the past, to support protection of customer data, VicRoads mandated multi-factor authentication (MFA) for all user accounts via SMS one-time passwords (OTPs) and authenticator apps.
Passkeys use biometrics such as facial recognition or a fingerprint, a PIN, or a swipe pattern in the sign-in process. Unlike traditional MFA, passkeys require both the device storing the private key and local authentication, meaning they are both phishing-resistant and cost-effective.
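As a concrete illustration of that mechanism, the sketch below shows the browser half of a passkey sign-in using the WebAuthn API. It is a generic, hypothetical example, not VicRoads’ or Corbado’s implementation: the endpoint paths are invented, and the server must verify the returned assertion against the public key stored at registration (base64url helpers as in the registration sketch earlier in this digest).

```ts
// Illustrative passkey sign-in: the OS performs local user verification
// (biometric, PIN, or pattern) and signs the challenge with a private key
// that never leaves the device, which is what defeats phishing.
async function signInWithPasskey(): Promise<void> {
  const options = await (await fetch("/webauthn/authentication/options")).json();

  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: base64urlToBuffer(options.challenge),
      rpId: "example.com",
      userVerification: "required",
    },
  })) as PublicKeyCredential;
  const response = assertion.response as AuthenticatorAssertionResponse;

  // Server-side, the signature is checked against the stored public key;
  // no shared secret ever crosses the network, unlike an SMS OTP.
  await fetch("/webauthn/authentication/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: assertion.id,
      authenticatorData: bufferToBase64url(response.authenticatorData),
      clientDataJSON: bufferToBase64url(response.clientDataJSON),
      signature: bufferToBase64url(response.signature),
    }),
  });
}
```

The cost contrast with SMS OTPs falls out of this flow: there is no per-login message to pay for, only a local signature check.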
Solution: Corbado provides a no-risk, passkey-first solution with minimal integration effort
VicRoads worked with passkey vendor Corbado, prioritizing a proven approach rather than building a solution from scratch.
Corbado’s deep understanding of both customer experience and the latest authentication technology gave VicRoads confidence that customers would find using passkeys easy.
Corbado also provided in-depth technical guidance on passkey-specific challenges, including browser compatibility, recovery flows and user experience optimizations – further solidifying VicRoads’ confidence.
“We selected Corbado because it could integrate passkey functionality into our existing infrastructure without disruption to our customers and operations”, said Crispin Blackall, Chief Technology Officer, VicRoads.
Implementation: pre-built, passkey-optimized components & SDKs enable quick integration
Corbado Connect seamlessly integrated with VicRoads’ existing infrastructure and CIAM, which is deeply embedded within the organization’s enterprise stack. This passkey enablement was achieved without requiring a migration of user data or authentication methods, ensuring a smooth and efficient transition for millions of users.
By layering passkey functionality on top of VicRoads’ current authentication system, Corbado enabled a frictionless deployment while preserving all existing user credentials. This approach eliminated the disruption and risks often associated with introducing new technology.
To ensure a smooth transition, VicRoads implemented passkeys in a phased rollout, beginning with personal customers. This gradual deployment, supported by Corbado Connect’s rollout controls, enabled VicRoads to monitor performance, address potential issues and optimize the user experience before seamlessly extending passkey authentication to partner and business customers.
Results: customers love passkeys, with up to 80% passkey activation rate in the first weeks
Within the first weeks of deployment, passkey adoption significantly exceeded VicRoads’ expectations. Users embraced the phishing-resistant authentication method, benefiting from a frictionless login experience optimized for speed and security.
The exceptionally high passkey activation rate – peaking at 80% on mobile devices and over 50% across all platforms – led to a 30% passkey login rate within the first seven weeks. Uptake continues to rise steadily, translating into measurable operational benefits, including:
- Reduced authentication-related support tickets
- Lower SMS OTP costs
- Improved user experience and security

So far, VicRoads has successfully rolled out passkeys on its web portal. The next step is to integrate passkeys into its native apps – myVicRoads and myLearners – allowing users to leverage their existing passkeys without additional setup. Ultimately, once passkeys are fully implemented across all digital platforms, VicRoads aims to eliminate passwords entirely, maximizing security and fully embracing a passwordless future.
“Passkeys are easy to use, without compromising on security. We’re excited to give our customers a simpler, more secure way to handle their registration and licensing services,” said Crispin Blackall, Chief Technology Officer, VicRoads.
Opportunity: setting a new standard for government authentication
With one of the largest public sector passkey deployments globally, VicRoads has established itself as a digital leader in authentication modernization for government applications.
Achieving high adoption rates without disruption, VicRoads has proven that large-scale organisations can enhance security and improve user experience simultaneously. This success positions VicRoads as a benchmark for other government agencies looking to modernize their authentication strategies.
“Passkeys represent a paradigm shift in how we authenticate users to digital identity services,” said Andrew Shikiar, Chief Executive Officer of the FIDO Alliance. “VicRoads’ adoption of passkeys showcases how government agencies can leverage this industry-wide innovation to protect people’s data while simplifying access to critical services. This is a significant step towards a more secure and efficient digital future for Victoria and beyond.”
Next Steps: developing next generation authentication
VicRoads’ ongoing partnership with Corbado ensures it remains at the forefront of authentication innovation while maintaining a seamless user experience for its expanding digital service base. A key advantage of Corbado’s managed passkey service is its built-in adoption-enhancing optimisations, ensuring continuous improvements and seamless conformity with future WebAuthn updates.
With this initiative, VicRoads has paved the way for broader adoption of passkeys in the government and public sector, proving that secure, frictionless authentication at scale is achievable.
About Corbado
Corbado is a leading provider of passkey solutions, enabling enterprises and government agencies to deploy passkey authentication seamlessly, without user migration. Corbado’s focus is on maximizing adoption in large-scale deployments. As a FIDO Alliance member, Corbado’s solutions ensure high adoption rates, enhanced security, and a frictionless user experience. Visit https://www.corbado.com/.
Describe your service/platform/product and how it’s using FIDO authentication.
With over 55 apps across nearly every major business category, Zoho Corporation is one of the world’s most prolific technology companies. Headquartered in Chennai, India, Zoho is privately held and profitable, employing more than 18,000 people worldwide. Zoho is committed to user privacy and does not rely on an ad-revenue business model. The company owns and operates its data centres, providing full oversight of customer data privacy and security. Over 100 million users globally—across hundreds of thousands of companies—trust Zoho to run their businesses, including Zoho itself. For more information, visit zoho.com.
What were the challenges you were trying to overcome?
Providing a secure and easy login experience in place of traditional authentication methods.
Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?
Improved security, along with strong supporting documentation and community.
Describe your roll out of FIDO authentication.
We rolled it out ourselves via our IAM team.

We first rolled out passkey authentication for zoho.com (100M+ users).

We are rolling out passkey management in our password manager, Zoho Vault, in May 2025.
What data points can you share that show the impact FIDO authentication has had?
30% month-over-month increase in passkey adoption
10% drop in password reset queries
Resources
https://help.zoho.com/portal/en/kb/accounts/sign-in-za/articles/passkey
https://www.zoho.com/vault/features/passkeys.html

Describe your service/platform/product and how it’s using FIDO authentication.
Samsung Electronics’ Galaxy smartphones support fast and convenient logins through biometric authentication and FIDO protocols.
What were the challenges you were trying to overcome?
FIDO-based passkeys are transforming the way users access websites and apps by eliminating the need for traditional usernames and passwords. Instead of being stored on a server where they could be exposed, passkeys are securely stored on the Galaxy device, enabling quick and secure sign-ins using biometric authentication.
Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?
FIDO enables secure authentication without transmitting users’ biometric data outside the device. Its ease of use, speed, compatibility across services, and status as an industry standard made it a compelling choice for Samsung Electronics.
Describe your roll out of FIDO authentication.
We have integrated FIDO authentication directly into our devices, enabling users to access it out-of-the-box. We continue to expand FIDO support across more Galaxy models and software updates.
Resources
https://www.samsung.com/levant/support/apps-services/how-to-create-and-use-a-passkey/
https://www.samsung.com/ca/support/apps-services/how-to-create-and-use-a-passkey/
https://news.samsung.com/in/the-knox-journals-the-passwordless-future-of-security

Describe your service/platform/product and how it’s using FIDO authentication.
Our mobile banking app is our bank’s largest branch, serving over 1,200,000 customers each month. These customers require the best protection against identity theft attacks, and we provide the most robust and innovative solutions, always prioritizing the best user experience. ABANCA Key is a new identity verification service based on FIDO standards. It was launched after years of research by leading players to prevent identity theft attacks. Using passkeys, ABANCA Key provides the highest level of protection: passkeys are impossible to guess or reuse, so they protect our customers’ private information from attackers.
What were the challenges you were trying to overcome?
On one hand, there’s the security challenge: the rise of phishing through calls and SMS messages in Spain has become a plague and a real problem for administrations, mobile operators, and financial institutions. On the other hand, there was the need to maintain the best user experience with the least friction. Passkeys give us a framework for interoperability and standardization, which provides ease of implementation and deployment. Above all, and for the first time in the security industry, it provides a framework of homogeneity to achieve a frictionless user experience.
Why did you choose FIDO authentication over other options? What did you identify as advantages of implementing FIDO?
We chose FIDO for many reasons: for its future strategy, as it allows us to follow many paths, including MFA and passwordless; for trust, as the FIDO Alliance includes leading players in security, operating systems, infrastructure, and the mobile ecosystem; and for the standardization and homogenization it gives us, which reduces implementation, deployment, and rollout times.
Describe your roll out of FIDO authentication.
To deliver the best user experience, we’re committed to having the deepest possible understanding of the technology. This enables us to effectively identify and resolve issues, and to better understand user behavior. We became our own partner by developing our own platform based on the FIDO standard and certifying it as if it were a provider.
We rolled out the deployment in phases. In under five months, we had the development of both server and front-end (iOS and Android) ready, and we began the rollout in an initial phase with our employees, and subsequently to end customers in batches. In just seven months, we were in general rollout to all customers.
What data points can you share that show the impact FIDO authentication has had?
- More than 42% of our customers are already using ABANCA Key
- More than 11,000,000 high-risk transactions have been protected with ABANCA Key
- The customer rollout ran without technical or service incidents, and most importantly, with our customer journey UX for enrolling in and using ABANCA Key, we’ve achieved a Customer Effort Score (CES) of 4.7

Please provide any links or resources that you feel would be useful in developing this case study.
https://comunicacion.abanca.com/es/noticias/abanca-primer-banco-espanol-en-implantar-la-tecnologia-de-llaves-de-acceso-en-la-banca-movil-para-reforzar-la-seguridad-de-sus-clientes/
https://www.abanca.com/es/banca-a-distancia/llave-abanca/
Out of the blue, I received a text from my father asking me, “What’s the difference between a password and a passkey?”
Somewhere, in his daily online journey, he was prompted by a website or application — a “relying party” in authentication lingo — to create a passkey. But the benefit wasn’t clear to him. Nor did there seem to be any urgency. He figured I’d know what passkeys are and what to do the next time he gets a nudge to set one up. I told him, “Let’s talk before you do anything.”
Your phone, computer, and tablet are now at risk, as the nightmare of AI-powered attacks comes true. There are now multiple warnings about the use of mainstream AI platforms to design, develop, and even execute attacks that are almost impossible to detect.
To add to recent reports from Symantec and Cofense, Guardio also now warns that “with the rise of Generative AI, even total beginners can now launch sophisticated phishing scams — no coding skills needed. Just a few prompts and a few minutes.”
And Microsoft has just told users the same. “AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools, making it easier and cheaper to generate believable content for cyberattacks at an increasingly rapid rate… AI-powered fraud attacks are happening globally.”
Apple’s new Passwords app (introduced with iOS 18, iPadOS 18, and macOS Sequoia) is a big leap forward in making password management simple and user-friendly for Apple users, even if it’s not as robust as other password managers. If you’ve ever fumbled through Safari settings to find a saved login or toggled through iCloud Keychain menus to edit credentials, the Passwords app is for you. It’s designed to give you a dedicated home for all your saved login credentials, passkeys, Wi-Fi passwords and two-factor authentication codes, all in one secure, easy-to-navigate interface.
ABANCA has achieved international FIDO certification for the Llave ABANCA service, the digital identity verification technology solution developed by the bank that allows customers to validate mobile banking transactions securely and quickly by unlocking their phone.
The FIDO (Fast IDentity Online) certification has been granted by the FIDO Alliance, the leading international association promoting digital authentication standards that are more robust and simpler than passwords or one-time codes. This consortium is made up of the world’s leading technology companies—Google, Apple, Microsoft, Amazon, and Visa, among others—who have joined forces to promote more robust and user-friendly digital identity methods. With the FIDO certification of Llave ABANCA, this global alliance accredits that the solution implemented by the bank meets the highest standards of online identity verification.
Poznań, Porto, Bruges, Budapest—Today, the European community of Linux Foundation Decentralized Trust announced the launch of its European Chapter. Our goal is to create a vibrant and inclusive community that contributes to LF Decentralized Trust projects. We aim to serve as the regional center for LF Decentralized Trust project development by offering mentorship, support, and a platform for promoting decentralized technologies, including distributed ledger technologies, tokenization, stablecoins, and digital identity.
Going passwordless is difficult for a lot of companies, even the ones with “security” in the name.
Jim Taylor, chief product and technology officer (and resident IT professional) at RSA Security, spoke with IT Brew about lessons learned as he led the deployment of passkeys, biometrics, and other non-password implementations across the organization. Two major keys to passwordless success, he said, included having lots of options and lots of patience.
“There’s no big switch. I wish there was a big red button that you could just press and go, ‘Ta-da!’ with passwordless, right? It doesn’t work like that,” Taylor told IT Brew.
Inverid has joined the FIDO Alliance. A release from the Dutch identity verification firm says it will bring its expertise in document authenticity verification to users of FIDO’s DocAuth document authenticity specifications.
We’re excited to welcome Oír Más as one of our new Matchbox partners. Oír Más is a radio collective dedicated to amplifying the voices of diverse communities through radio experimentation.
The post From memory to action: Radio as a tool for a healthier information ecosystem appeared first on The Engine Room.
The introduction of the EUDI Wallet in Germany, based on the amended eIDAS Regulation, is a central challenge: providing citizens with a secure digital identity solution by 2026. The goal is to create a comprehensive ecosystem that integrates public institutions, private-sector actors, and regulators. This ecosystem is meant not only to establish technical standards and processes but also to foster citizen acceptance and ensure the legal framework is upheld.
The authors of this whitepaper propose a public-private partnership in the form of a cooperative that acts as a neutral ecosystem orchestrator. This cooperative should:
- Offer democratic governance, with transparency, control, and influence for all members.
- Implement a cost-covering financing model based on membership fees and special contributions.
- Take on central tasks such as certifying wallets, registering verifiers, and facilitating communication between actors.

Challenges for the new federal government
Under Regulation (EU) 2024/1183, the federal government is obliged to provide an EUDI Wallet for all citizens by the end of 2026, in which identity data can be stored securely in electronic form. The EUDI Wallet ecosystem must be fully implemented by 2027. The Federal Ministry of the Interior has been leading the design and implementation of the German wallet solution since May 2023. Fostering acceptance among citizens requires not only providing the technical infrastructure but also building a comprehensive ecosystem in which private-sector relying parties, certified trust services, public institutions at the federal and state levels, regulators, and certification bodies interact. Building such an ecosystem is a highly complex task: it requires establishing standards and processes that all participating private companies and public institutions can share and implement, with the interests of citizens at the forefront. In addition, all legal requirements must be met, misuse of personal data must be prevented, and an attractive investment environment must be created for private-sector companies.
Building the EUDI Wallet ecosystem as a public-private partnership with neutral governance
The authors of this whitepaper argue for a public-private partnership, organized as a cooperative, to act as ecosystem orchestrator. The cooperative's purpose is to promote the interests of all its members, that is, of all actors in the EUDI Wallet ecosystem, equally. It requires democratic governance that gives members influence, control, and transparency. If the cooperative itself makes no investments, its statutes should include a not-for-profit clause, so that it only needs to cover costs oriented to the members' needs.
The cooperative should be governed by the following bodies:
- General assembly: the cooperative's highest decision-making body. Each member has one vote; the combined votes of non-European and investing members may not exceed 25%.
- Executive board: runs day-to-day business and is responsible for implementing governance and strategy.
- Supervisory board: the body that oversees the executive board.
- Ecosystem coach: a staff function mediating between the actors to disclose and manage conflicts of interest.

Public institutions should found the cooperative, define a set of rules in the form of statutes, and open membership to private-sector actors. All actors in the roles defined by the eIDAS amendment must be involved, e.g. PID providers, Pub-EAA providers, (qualified) trust service providers, wallet providers, and trust list providers.
Financing concept: Operating costs are financed through membership fees. If funds are temporarily needed for specific purposes, they can be financed through special contributions from the members.
Tasks: promote the members' interests, facilitate communication between members, certify wallets, register verifiers, clarify legal questions, consolidate public communication, make adoption of the EUDI wallet measurable, provide interfaces, and more.
Opt-out: Membership can be terminated by ordinary notice. Findings from a risk analysis of opt-out scenarios should be taken into account when drafting the statutes.
Conclusion
Building the ecosystem for the German EUDI wallet requires a neutral ecosystem orchestrator that constructs the ecosystem together with public and private-sector organizations. As shown here, this can be realized through a cooperative in which all actors of the ecosystem cooperate in a structured way and in compliance with antitrust law.
About the Authors
IDunion SCE mbH is the successor organization of a consortium funded by the BMWK under the "Showcase Secure Digital Identities" program. The European cooperative has been commissioned by its member companies to operate a digital trust anchor.
Over the past three years, a dedicated research team from IDunion has been working intensively on the Digital Product Passport (DPP).
In the last two years, our focus shifted from theory (a joint publication with key partners [→ read the paper here]) to hands-on implementation, resulting in a demo DPP solution that is:
Data Carrier Agnostic: The product link can be embedded into any data carrier listed in the Landscape of Digital Product Passport Standards | StandICT.eu 2026.
Product Agnostic: Our DPP works with any type of product, whether batch-based, lot-based, or individual instances.
Automatic Identification of the Requester: The solution aligns with eIDAS 2.0 by supporting decentralized identification of individuals and organizations.
Technology Stack Independent: The DPP can be accessed without downloading an additional app, which lowers the threshold for adoption and enhances usability.
Lightweight & Semantically Rich: Information is delivered through Verifiable Credentials (VCs) and semantic data schemas, ensuring seamless interoperability. Additionally, we use W3C Decentralized Identifiers (DIDs) to guarantee unique, globally resolvable product IDs (a minimal sketch follows the demo introduction below).
Cryptographically Verifiable: All data within the DPP is cryptographically signed, enabling validation of authenticity and trust without central intermediaries.
Trust Infrastructure Support: Establishing trust over the data and integration with digital identity trust anchors is possible, e.g., by using the eIDAS 2.0 mechanisms and the EU trust infrastructure proposed for natural and legal entities.
Demonstration: Digital Product Passport System for Battery
Our final demo phase featured a live demonstration of a DPP tied to a battery cell used in an electric scooter.
The goal was to simulate how a real-world DPP can accompany a product through its entire lifecycle — from raw material extraction to end-of-life recycling.
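The feature list above combines three mechanisms: a DID as a globally resolvable product ID, product claims carried as verifiable data, and a signature that can be checked without a central intermediary. Here is a minimal Rust sketch of that pattern. The record type, field names, and example DID are hypothetical illustrations rather than IDunion's actual schema; signing uses the ed25519-dalek crate.

```rust
// Minimal sketch of a DID-identified, signed product-passport record.
// DppRecord, the DID, and the claims are hypothetical, not IDunion's model.
// Assumed Cargo.toml deps:
//   ed25519-dalek = { version = "2", features = ["rand_core"] }
//   rand = "0.8"
use ed25519_dalek::{Signer, SigningKey, Verifier};
use rand::rngs::OsRng;

struct DppRecord {
    product_did: String,           // globally resolvable product ID
    claims: Vec<(String, String)>, // e.g. battery chemistry, recycled content
}

impl DppRecord {
    // Toy canonical serialization: key=value lines. A real DPP would use
    // a VC data model with a well-defined canonicalization.
    fn to_bytes(&self) -> Vec<u8> {
        let mut s = format!("id={}\n", self.product_did);
        for (k, v) in &self.claims {
            s.push_str(&format!("{k}={v}\n"));
        }
        s.into_bytes()
    }
}

fn main() {
    let record = DppRecord {
        product_did: "did:web:example.org:battery:batch-42".into(), // hypothetical DID
        claims: vec![
            ("chemistry".into(), "LiFePO4".into()),
            ("recycledContentPct".into(), "12".into()),
        ],
    };

    // The issuer (e.g. a battery manufacturer) signs the canonical bytes.
    let issuer_key = SigningKey::generate(&mut OsRng);
    let signature = issuer_key.sign(&record.to_bytes());

    // Any requester can verify authenticity offline given the issuer's
    // public key, which a real deployment would resolve from a DID
    // document or an eIDAS trust list rather than hold locally.
    issuer_key
        .verifying_key()
        .verify(&record.to_bytes(), &signature)
        .expect("signature should verify");
    println!("passport verified for {}", record.product_did);
}
```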
We explored the perspectives and requirements of key stakeholders, including miners, battery pack manufacturers, economic operators, consumers, recyclers, and government and public-interest organizations.
Through interactive demonstrations, we answered key questions in real time, illustrating how a decentralized, interoperable, and trustworthy DPP can benefit every actor along the value chain.
For more detailed insights and demo material, check out the links below:
German: https://idunion.org/piloten/sichere-digitale-identitaeten-fuer-produkte/
English: https://idunion.org/piloten/en/sichere-digitale-identitaeten-fuer-produkte/
A Huge Thank You to Our Partners
This project would not have been possible without the creativity, technical expertise, and dedication of our fantastic partners.
Dr. Andreas Füßler (GS1 Germany), Florin Coptil (Robert Bosch GmbH), Werner Folkendt (Robert Bosch GmbH), Dominic Hurni (SBB), Cornelia Schalch (SBB), Johannes Ebert (Spherity GmbH), Sebastian Schmittner (European EPC Competence Center GmbH), Christian Fries (European EPC Competence Center GmbH), Paulina Drott (GS1 Germany), Dr. Susanne Guth-Orlowski, Ralph Troeger (GS1 Germany), Roman Winter (GS1 Germany)
Thank you for your ongoing commitment to shaping the future of product transparency and digital trust.
What’s Next?
With all project goals achieved and deliverables submitted, the DPP demo project is now officially closed.
We’re incredibly proud of what we’ve accomplished and are eager to apply these insights to future initiatives that further the digitization of sustainable and circular value chains.
How accurate is your inventory?
For many retailers, answering that question is more difficult than it appears—especially when inventory counts are only conducted once or twice a year.
In this episode, Dean Frew, President, RFID Solutions Division, SML IIS, sits down with hosts Reid Jackson and Liz Sertl to explore how RFID solves one of retail's biggest challenges. Dean shares how real-time data at the item level drives results, from curbing return fraud to supporting buy online, pick up in store.
He also discusses what it takes to make RFID work at scale, how adoption has changed post-COVID, and why distribution centers are the next frontier.
In this episode, you’ll learn:
How RFID improves inventory accuracy
Where retailers are seeing the most significant ROI
New use cases beyond the sales floor
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(02:20) Dean’s background and RFID journey
(06:35) Improving inventory accuracy with RFID
(12:13) Reducing returns fraud with item-level data
(18:07) How BOPIS impacts inventory and sales
(23:01) Boosting inbound accuracy at distribution centers
(26:14) RFID in checkout and fitting room experiences
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Dean Frew on LinkedIn
Check out SML Group
The Limits of Interop
In early 2025, Blockchain Commons architected and engineered a major project for the Zcash blockchain: the ZeWIF specification that allows all of its wallets to interoperate.
Interoperability is something that I consider vitally important for a technological ecosystem, so I was thrilled that Blockchain Commons could improve interoperability for Zcash. Here’s a bit more on why, what Blockchain Commons did for Zcash, and what I’d like to do for other technological communities.
Obviously, some level of interop is necessary for any ecosystem, or it just wouldn’t work. There was a white paper and there are BIPs for Bitcoin; without them, no one could agree on how the blockchain works. We similarly have specifications, RFCs, and standards for everything from email to graphics file formats.
Unfortunately, there are limits to interop. Standards tend to cover the things that are required for the broadest level of intercommunication within an ecosystem. But when companies begin to work on their own apps, their work often turns proprietary—sometimes aggressively so—and that limits a technological ecosystem’s ability to continue to grow.
As an example, email has standards such as RFC 5322 for how mail servers communicate with each other. When mailers begin storing their mail, things become somewhat more balkanized: you might use a classic mbox format, a single-message eml format, or a proprietary format such as Microsoft Outlook's msg. Finally, there's no consistency at all in how mailers might algorithmically filter messages: it's hard to control whether a message ends up in the "Important", "Everything Else", or even "Spam" category of Gmail, even if a mailer adopts newer standards such as DKIM, DMARC, and SPF.
Yet, all of these types of interactions are important. If a mail ends up in a Spam folder, that’s almost as bad as it not being delivered at all; if mail can’t be restored from a proprietary mailbox, it’s lost as well. That’s why interop is important.
The Power of Interop
I developed four core architectural fundamentals for Blockchain Commons that I call the Gordian Principles. Of them, Privacy is only peripherally related to the question of interoperability, but the other three highlight why interop is important for the health of any technological ecosystem.
Independence is a user-focused principle. It says that users should be free from external control. This is the heart of my ideas of self-sovereign identity, and it's the heart of interoperability too. With the ability to export or exchange data in an interoperable way, users don't get locked into a single platform. Instead, they can choose among a variety of options to find an application, service, or pricing structure that meets their precise needs.
Resilience is a data-focused principle. It says that data should be protected from loss. When data output and exchange formats are standardized for interoperability, they become widely used and widely understood. That ensures that they can be recovered far into the future. If you've ever tried to recover an old ClarisWorks or WordStar file, you've seen how much a non-interoperable file format can damage resilience. In comparison, .rtf files will never be lost, and even .docx is pretty good, both because Microsoft Word has an enormous install base and because it's an XML-based output format.
Openness is a creator-focused principle. It says that infrastructure should be open so that new creators can join the ecosystem. When communication is standardized, anyone can develop in accordance with the standards, including newcomers. But this isn't just beneficial to creators; it's also beneficial to users. Without new entrants, an ecosystem can become staid and stagnant. With new entrants, an ecosystem is constantly innovating and expanding. It creates an atmosphere of coopetition: members of the ecosystem cooperate to interoperate, but then compete to offer the best products within those standards.
In other words, interoperability is beneficial for the vast majority of members of the ecosystem: users get more options, more innovations, and freedom of choice; creators get the ability to fairly participate and compete; and data gets strong protections far into the future.
What We Did for Zcash
I was thrilled when members of the Zcash community started talking with me about their needs, because it showed a strong, ecosystem-wide understanding of the need for interoperability.
But there was also a strong inciting incident that led to the Zcash work: the older zcashd server was being deprecated and so there was a need to migrate digital-asset data from its wallet to others. This also reflected a longer-term issue. Digital assets had sometimes been lost during previous migrations due to differences or even bugs in different wallets.
Obviously, a one-off migrator could have been created, likely linking zcashd with its replacement wallet, zallet. But I approached things from an architectural point of view: I wanted to create an extensible Wallet Interchange Format (that’s the “eWIF” in “ZeWIF”) that would not just enable the zcashd migration, but also allow any migration from one wallet to another within the Zcash ecosystem. I wanted to take the immediate needs and political will and turn it into something that could benefit the community for years to come.
That was the ZeWIF proposal that we put forth. It called for one month of studying Zcash wallet data as it currently exists, then another two months of developing the spec and writing libraries to convert among different wallets using that spec. (That timeline turned out to be a bit ambitious, but we’re closing out the initial design with a fourth month of work.)
The zewif Rust crate lies at the center of the ZeWIF system. It creates in-memory representations of data from a variety of inputs and can output that abstracted data in a number of forms. Obviously, it can accept input from ZeWIF files and it can output to ZeWIF files. However, that's just part of the process. Individual developers can also choose to create front ends that import data from their wallets to zewif and back ends that export the data from zewif to their wallets.
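To make that hub-and-spoke design concrete, here is a minimal Rust sketch of the pattern. Every name in it (InterchangeWallet, WalletFrontEnd, ZcashdFrontEnd, and so on) is a hypothetical stand-in for illustration, not the zewif crate's actual API.

```rust
// Hypothetical sketch of a hub-and-spoke interchange architecture like
// ZeWIF's: per-wallet front ends parse native formats into one shared
// in-memory representation, and back ends export it again. None of
// these names come from the real zewif crate.

/// The shared, abstracted in-memory representation ("the hub").
#[derive(Debug, Default)]
struct InterchangeWallet {
    seeds: Vec<Vec<u8>>,
    addresses: Vec<String>,
    /// Anything a front end couldn't interpret is preserved verbatim,
    /// so that no data is silently lost during migration.
    unparsed: Vec<Vec<u8>>,
}

/// A front end imports one wallet's native format into the hub.
trait WalletFrontEnd {
    fn import(&self, raw: &[u8]) -> Result<InterchangeWallet, String>;
}

/// A back end exports the hub representation to one wallet's native format.
trait WalletBackEnd {
    fn export(&self, wallet: &InterchangeWallet) -> Result<Vec<u8>, String>;
}

struct ZcashdFrontEnd; // hypothetical importer for zcashd wallet dumps
struct ZalletBackEnd;  // hypothetical exporter for zallet

impl WalletFrontEnd for ZcashdFrontEnd {
    fn import(&self, raw: &[u8]) -> Result<InterchangeWallet, String> {
        // Real code would parse the zcashd dump; we stash the bytes as
        // "unparsed" to show the no-data-loss principle.
        Ok(InterchangeWallet { unparsed: vec![raw.to_vec()], ..Default::default() })
    }
}

impl WalletBackEnd for ZalletBackEnd {
    fn export(&self, wallet: &InterchangeWallet) -> Result<Vec<u8>, String> {
        Ok(format!("zallet-import: {} unparsed blobs", wallet.unparsed.len()).into_bytes())
    }
}

fn main() -> Result<(), String> {
    // With N wallet formats, each vendor writes one front end and one
    // back end against the hub, instead of N^2 pairwise converters.
    let hub = ZcashdFrontEnd.import(b"...zcashd wallet bytes...")?;
    let out = ZalletBackEnd.export(&hub)?;
    println!("{}", String::from_utf8_lossy(&out));
    Ok(())
}
```

Preserving unparsed bytes in the hub is also what makes it possible to warn, rather than silently fail, when a conversion loses data.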
As we release ZeWIF into the ecosystem, we should see advancements in accordance with the Gordian principles:
Independence. Users will be able to move their funds easily among Zcash wallets.
Resilience. Translation of keys and seeds should be more reliable, and if conversion is done using our best practices, there should be clear warnings if anything wasn't converted.
Openness. New wallets can join the ecosystem. If they have innovative features, they can easily pick up new users if they support ZeWIF as an import format.
I've been thrilled to have strong support from wallet makers in the Zcash community. That type of buy-in is required to make interoperability work. Zingo Labs, the makers of the Zingo wallet, introduced us to the opportunity and have worked closely with us to fulfill our vision. Meanwhile, ECC has been our next testbed, since they're using ZeWIF to manage conversions between zcashd and zallet. Other principals have been involved as well, and just as importantly we didn't receive any pushback from anyone who refused to use the format. (Not everyone has adopted it yet, but we're just finalizing the complete ZeWIF draft at this point.)
We were fortunate, because that’s not always the case.
What We Did for Bitcoin
This isn't Blockchain Commons' first rodeo. Creating interoperability has been Blockchain Commons' goal since the start, and we've done most of our interop work to date with Bitcoin.
Our two biggest successes for Bitcoin have been Animated QRs and SSKR. Animated QRs are a standardized way to move large files across airgaps. That's the exact sort of intercommunication that has always required interoperability. SSKR is a standardized way to shard a secret, currently focused on Shamir's Secret Sharing. Because it isn't just about intercommunication, getting a variety of companies to use it was a bigger victory: it ensures those secrets will remain accessible and resilient into the far future. Both technologies are integrated with our Uniform Resources, which have been implemented by more than a dozen companies, offering true interoperability.
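SSKR itself implements Shamir's threshold scheme over GF(256). As a much simpler illustration of the general sharding idea, here is a Rust sketch of an XOR-based n-of-n split, in which every share is required to reconstruct the secret. Unlike Shamir's scheme it has no threshold, and it is purely illustrative, not Blockchain Commons code.

```rust
// Toy n-of-n secret sharding via XOR: all n shares are needed to
// reconstruct. SSKR instead uses Shamir's threshold scheme, where any
// k of n shares suffice. Assumed Cargo.toml dep: rand = "0.8".
use rand::RngCore;

fn split(secret: &[u8], n: usize) -> Vec<Vec<u8>> {
    let mut rng = rand::thread_rng();
    // n-1 shares are pure randomness...
    let mut shares: Vec<Vec<u8>> = (0..n - 1)
        .map(|_| {
            let mut s = vec![0u8; secret.len()];
            rng.fill_bytes(&mut s);
            s
        })
        .collect();
    // ...and the final share is the secret XORed with all of them, so
    // XORing all n shares together yields the secret again.
    let last = secret
        .iter()
        .enumerate()
        .map(|(i, &b)| shares.iter().fold(b, |acc, s| acc ^ s[i]))
        .collect();
    shares.push(last);
    shares
}

fn combine(shares: &[Vec<u8>]) -> Vec<u8> {
    (0..shares[0].len())
        .map(|i| shares.iter().fold(0u8, |acc, s| acc ^ s[i]))
        .collect()
}

fn main() {
    let seed = b"example wallet seed";
    let shares = split(seed, 3);
    assert_eq!(combine(&shares), seed);
    println!("reconstructed {} bytes from {} shares", seed.len(), shares.len());
}
```

The interop value comes from standardizing the share format itself, so that shares created by one vendor's device can be combined by another's years later.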
But these successes have unfortunately been piecemeal. There's just one company that I'm aware of that's adopted a pretty wide swath of Blockchain Commons' Gordian specifications, and that's our long-time sponsor, Foundation. We most recently worked with them to support QuantumLink, a Post-Quantum Cryptography (PQC) method of Bluetooth communication that's in their new Passport Prime device, but they've also implemented URs, Animated QRs, SSKR, and other Blockchain Commons interop specs. As a result, they've got well-studied, mature specifications that they didn't roll themselves and that should be resilient and reliable far into the future. I think that adding in a variety of linked interop specs like this has a multiplicative effect.
I’d love to see more of this in the Bitcoin community, but a lot of people are resistant.
Why People Fight Interoperability
The primary reason that we see people fight interoperability is market dominance. The Bitcoin ecosystem has grown large enough that some of the bigger players have stepped back from interoperability.
I was sad to see ColdCard go this way, after they themselves built on Trezor and other open-source libraries. They've at least remained source-verifiable (meaning you can view their code in their repo), at least until you get to the proprietary chip, but they were once one of maybe three hardware wallets that were fully open source, and so fully interoperable.
But I think the recent release of Ledger Recover was even more of a tragedy. Here they were offering a big innovation: a way to recover seeds by splitting them up and distributing them off device, similar to Blockchain Commons’ own Collaborative Seed Recovery (CSR). But by keeping their protocol for distributing and recovering seed shares non-interoperable, they kept anyone else from offering seed vaults of their own, instead locking their users into their choices—which were very unpopular due to privacy-busting requirements for KYC information.
The exact opposite approach is taken by another of Blockchain Commons' long-time developer partners, Craig Raw of the Sparrow wallet. He's working hard to make Sparrow compatible with everything out there, but the difficulty he faces underlines the issues with the semi-interoperable state of most blockchains. He has to make NASCAR-like lists of otherwise incompatible products and introduce secret sauce to interoperate with each of them. We're very lucky to have the Sparrow wallet working with all of these different devices, but it's something that would never happen if there weren't someone as dedicated as Craig working on the project.
For smaller companies, interoperability is a way in. Obviously, you should do it!
For bigger companies, interoperability means both trusting your engineers to provide the best experience and trusting your customers to recognize it. That's a leap of faith, but one that I'd hope to see most companies make in our industry. After all, the idea of personal control is likely one of the reasons that your customers are working with digital assets in the first place!
The Potential for Other Blockchains
I hope that our work with Zcash (and before that with Bitcoin) is just a first step. I'd like to take that experience to other blockchains and offer new interoperability to grow those ecosystems as well.
Even after the work we've already done, Bitcoin may still need this sort of work the most, because it's gotten so big. How could we make migration between wallets easier? Or just the migration of seeds or keys? How could we more widely standardize the backup of seeds with a methodology like Ledger Recover or our own CSR? How could we interoperate third-party services such as pricing and fee lookups? Every one of these elements of interoperability would improve the ecosystem, but they require a dedication to interoperability itself.
The same is true for other ecosystems that have gotten large enough to see multiple companies working on projects. Big changes could be the incentive for this, such as Monero's move to Seraphis. But even without big changes on the horizon, big ecosystems grow to the point where interoperability becomes a requirement: Ethereum has a huge infrastructure built around WalletConnect, but we've talked with people in the ecosystem who think there's real room for improvement. I hope that many chains (and other ecosystems) beyond Monero and Ethereum will see the advantages of improving interoperability for all the reasons I've laid out here, particularly independence for users, resilience for data, and openness for developers.
Are you a leader working in a digital-asset ecosystem? Would you like to work with me to take lessons learned from the Zcash project to create interoperable wallet formats, data-exchange formats, service formats, or something else? Drop me a line at team@blockchaincommons.com. I’d love to talk about how we can expand your ecosystem as well.
On April 10, EdgeCon Spring 2025 brought together educators, technologists, and institutional leaders at Seton Hall University to explore the evolving landscape of digital teaching and learning. This premier event focused on the powerful intersection of EdTech, innovative pedagogy, and the administrative technologies that support institutional performance and drive student success. With forward-thinking sessions and collaborative discussions, attendees gave the event an overall 4.7 favorable rating (out of 5) and gained fresh insights into how technology continues to shape the future of education.
Connecting with Graduate Students in the Digital Learning Era
The day's events kicked off with the breakout session, Connecting with Graduate Students in the Digital Learning Era, presented by Georgian Court University's Denise Furlong, Assistant Professor, and Janine Ataide, Educator and Graduate Student. As more graduate programs provide online options to meet the needs of working adults, universities must consider different ways to engage these students as valued members of the community even though they may never see the physical campus. Some students report that they feel a sense of belonging and strong collaboration within their program, while others feel a sense of isolation in their learning experience. This session explored the different aspects of programs and courses that are important in engaging and empowering graduate students as truly part of the university community, as well as how course design, faculty collaboration, and peer connections can contribute to fostering a sense of belonging among online graduate students.
Navigating the Use of Generative AI in Online Classrooms
As generative AI tools become increasingly prevalent, educators face unique challenges and opportunities in the online classroom. Carol Smith-Cuevas, Associate Director, Learning Engagement & Development, Kean University, discussed effective strategies for responding to students' use of generative AI in Navigating the Use of Generative AI in Online Classrooms. Attendees explored this topic from a professor's perspective and learned practical approaches to integrating AI tools into coursework, setting clear guidelines, and promoting ethical use. Smith-Cuevas also explained different ways to design assignments that encourage critical thinking and originality, how to utilize AI detection tools, and resources that can help students to understand the implications of AI-generated content. Attendees left equipped with actionable practices to create a fair, engaging, and academically rigorous online classroom.
Integrating Free AI Training and Micro-Credentials into Learning Environments
Professors John Shannon, Seton Hall University, and Susan O'Sullivan-Gavin, Rider University, led the session, Unlocking the Future of Learning: Integrating Free AI Training & Micro-Credentials into Learning Environments, and explored how educators can seamlessly integrate free, high-quality AI training programs and micro-credentialing opportunities from platforms like MIT OpenCourseWare, Harvard Online, LinkedIn Learning, Google, and others into their courses. Attendees learned how to identify and incorporate free AI training and certification programs that align with course objectives, and strategies for scaffolding these resources into curriculum design to enhance student engagement and skill development. Shannon and O'Sullivan-Gavin also shared helpful methods for leveraging micro-credentials to increase student employability and lifelong learning pathways.
Demystifying AI Literacy
Attendees joined NJIT/NJII's Learning & Development Initiative to explore critical AI literacy standards for education and beyond. Led by Stefanie Toye, Project Manager, NJIT & NJII, and Dr. Teresa Keeler, Project Manager at The Learning & Development Initiative, NJIT, this presentation shared why these standards are essential, who needs them, and how to implement them effectively. They shared specific ideas about how these standards can be constructed and adopted, and why AI literacy is crucial for K-12 and higher ed educators and administrators.
Advancing Accessible Digital Education at Hudson County Community College
Callie Martin, Senior Instructional Designer, Hudson County Community College, and Joshua M. Gaul, Ed.D., Associate Vice President & Chief Digital Learning Officer, Edge, led the presentation, Building an Inclusive Future: Advancing Accessible Digital Education at Hudson County Community College (HCCC). This breakout session highlighted HCCC's collaborative efforts with Edge to build a more inclusive and accessible digital learning environment. Through this partnership, HCCC has embraced Universal Design for Learning (UDL) principles to remove barriers and create equitable educational experiences for all students. The session showcased how the college, supported by Edge's expertise and resources, has implemented student-centered strategies that prioritize accessibility, engagement, and success. Attendees learned practical approaches to designing inclusive courses and heard firsthand how institutional collaboration can drive meaningful change in digital education.
“I enjoyed the morning presentation that I attended and the Panel Session. NJEdge always does such a great job with these conferences!” Carol Smith-Cuevas
The day continued with the second round of breakout sessions, including The Digital and Data-Driven Center for Teaching and Learning led by John Baldino, OFS, Director, Center for Teaching and Learning, Lackawanna College. He shared best practices for using communication technology, digital content and marketing platforms, artificial intelligence, and collected data to create a sustainable Center for Teaching and Learning model that can help empower CTLs to support faculty in their academic success.
Crafting Collaborative Conversations
Curiosity Catalysts: Crafting Collaborative Conversations, led by Adelphi University's Karen Kolb, Director, Faculty Center for Professional Excellence, Jennifer Southard, Instructional Designer, and Marilena Orfanos, Instructional Designer, examined research-backed, low-effort strategies that enhance online discussions and collaboration without adding to instructor workload. Participants learned how to design thought-provoking prompts that ignite curiosity, integrate AI and digital tools to support interaction, and implement alternative discussion formats, such as multimedia responses and collaborative annotation, to foster deeper engagement. This session also covered effective ways to maintain instructor presence without micromanaging, and gave attendees ready-to-use techniques to create more engaging, inclusive, and meaningful online discussions.
Supporting Persistence in STEM Learners
Natalya Voloshchuk, Assistant Teaching Professor, and Karen Harris, Senior Instructional Designer and Assessment Specialist, from Rutgers University led the session A Learning Outcomes Strategy for Supporting Persistence in STEM Learners to demonstrate how assessment and feedback can aid in developing self-directed learners by examining an implementation of a deliberate data collection Rubric. They discussed how the Rubric is used to help students identify their strengths alongside the areas to work on toward improvement, while also offering the instructor valuable insights for supporting persistence in STEM courses. Harris and Voloshchuk shared some course-level learning data and reflected on how it might impact learning across a STEM curriculum. They also considered how incorporating learning outcomes in rubrics can help students make connections to work skills they are developing that will support them in internships and research settings.
The Implications of AI for Next-Generation Education
Steven D'Agustino, Senior Director for Online Programs, Fordham University, joined EdgeCon to explore the transformative impact of AI on education. He began by defining key AI terms such as Machine Learning (ML), Natural Language Processing (NLP), and Neural Networks, and discussed the growing use of AI tools like ChatGPT among students for academic tasks. The presentation highlighted the challenges and opportunities of integrating technology into education, categorizing adoption methods as intentional, accidental, or unilateral, and how the COVID-19 pandemic accelerated unilateral adoption due to necessity. D'Agustino raised concerns about AI's potential to depersonalize learning, leading to cognitive de-skilling, and creating a sense of futility among students and educators. However, he also outlined four essential tasks for educators in the AI era: curating, contextualizing, creating, and collaborating. The presentation underscored the importance of equity in AI integration and how educators must focus on humanistic ends to ensure that AI is used deliberately and equitably to benefit all students.
A Blueprint for Excellence in Short Course Development and Design
In the session, Beyond the Badge: A Blueprint for Excellence in Short Course Development and Design, Molloy University's Dr. Amy Gaimaro, Dean of Innovative Delivery Methods, and Susan Watters, Associate Director of Blended and Online Learning, examined the blueprint development of high-quality short courses while embedding quality course design badges. Participants learned how to identify quality criteria for designing courses, establish clear learning objectives and outcomes that align with industry standards and learner needs, and how to address regular and substantive interactions in online asynchronous short courses.
This session was ideal for instructional designers, online administrators, training managers, and educators who were looking to enhance the quality and impact of their short course offerings. Attendees gained strategies for incorporating quality assurance throughout the course development lifecycle, best practices for designing and implementing effective quality badge systems, and actionable steps to improve learner engagement and course effectiveness.
Visions for Online Learning: Evolving Strategies and Institutional Growth
As online learning continues to evolve, institutions must adapt their strategies to meet shifting student expectations, market demands, and technological advancements. The EdgeCon panel discussion, Visions for Online Learning: Evolving Strategies & Institutional Growth, brought together higher education leaders to explore innovative approaches to online learning that drive institutional growth and long-term success. Panelists, Michael Ciocco, Ph.D., Associate Vice President of Online Learning, Rowan University, John F. O'Callaghan, Jr., Vice President for Transformational Learning & Chief Online Officer, Kean University, and Joshua M. Gaul, Ed.D., AVP & Chief Digital Learning Officer, Edge, discussed emerging trends, the role of data and AI in shaping digital learning experiences, strategies for scaling programs sustainably, and the balance between quality, accessibility, and financial viability. Attendees gained valuable insights into how institutions are redefining their online learning strategies to expand access, improve student outcomes, and remain competitive in an evolving educational landscape.
The Rise of Non-Degree Credentials
Afternoon sessions kicked off with The Rise of Non-Degree Credentials–The Future of Higher Education, led by Michael Edmondson, Associate Provost, NJIT. He shared how traditional higher education is struggling to keep pace with workforce demands, and a growing number of companies now favor skills-based hiring over degree-based credentials, with 70% of job skills expected to change by 2030 due to AI and automation. Employers increasingly prefer hiring candidates with business-oriented or technical microcredentials that provide real-world experience and rapid upskilling. Attendees learned how microcredentials can offer a solution by closing the workforce gap, aligning learning with employer needs, and fostering lifelong learning. As businesses prioritize AI, data analytics, and soft skills, microcredentials are emerging as the future of education, providing professionals with the flexibility to adapt and thrive in an evolving job market.
The Growth of Online Education at Rowan University
Mike Sunderhauf, Director of Instructional Design, and William McCool, Online Course Operations Coordinator, from Rowan University led the breakout session, Shaping the Future: The Growth of Online Education at Rowan University, where they shared how Rowan University is undergoing a transformation in its approach to online education, shifting focus from traditional on-campus programs to scalable online offerings. The session explored how Rowan University is reshaping online education through a combination of strategic collaboration, program design, and faculty empowerment, ultimately fostering an engaging and dynamic learning environment for students and instructors alike. Sunderhauf and McCool shared how their learning strategy aligns with a flexible model, allowing instructors to follow a standardized framework while being empowered to update and adapt their courses to meet the needs and interests of each learner group. They showed how this balance between consistency and adaptability fosters both reliability and innovation in the learning process.
Integrating Virtual Reality into Higher Education
Joining EdgeCon from Seton Hall University, Renee Cicchino, Director of Instructional Design and Training, and Riad Twal, Senior Instructional Designer, gave a closer look into how the University has been experimenting with Virtual Reality since the release of the Google Cardboard Viewer in 2014. In 2024, they acquired a number of Meta Quest Pro devices, allowing for an entire class to participate in a virtual experience at the same time. During the 2024 Fall semester, Seton Hall launched a pilot program examining how they can best facilitate Virtual Reality experiences for graduate and undergraduate classes. These experiences are informing the development of a Virtual Reality Showcase to formally introduce this technology to their faculty, along with examples of how this technology can be incorporated into various course topics.
This session focused on the instructional designer’s perspective of evolving VR Technology, the introduction of the Quest for Business device management software, initial faculty and student experiences with the Meta Quest Pro devices, future plans for expanded access and utilization, and suggestions for implementation at other institutions.
“Excellent topics and sessions.” Charles Wachira
Grace E. Cook, Ph.D., Program Area Lead of Computer and Mathematical Sciences, Montclair State University, has seen an increase in the number of English Language Learners (ELL) in her classroom, particularly students whose primary language is Spanish. By incorporating the use of ChatGPT and Google Notebook into the daily functions of the class, she has been able to make mathematics more accessible. In this session, Cook shared her experiences with teaching in English and Spanish with the assistance of AI and discussed the pros and cons of their uses and what her ELL students list as the positives and negatives of each resource.
Podcasting to Create Interdepartmental Collaboration
In Podcasting to Create Interdepartmental Collaboration and Showcase Faculty Innovation, Seton Hall University presenters, Kate Sierra, Instructional Designer, Teaching, Learning, and Technology Center, and Ann Oro, Senior Instructional Designer, explored how the Teaching, Learning, and Technology Center (TLTC) at Seton Hall launched a podcast series called Innovate and Educate to explore the intersection of technology and teaching. They highlighted how topics such as technology integration, accessibility tools, and other innovations are transforming learning, improving student outcomes, and addressing classroom challenges.
Attendees gained insights into the process of developing and sustaining a podcast, including identifying relevant resources, building an audience, and maintaining engagement. They also learned how this initiative has strengthened institutional resiliency by fostering a culture of innovation and shared learning. Participants were encouraged to consider how podcasting could serve as a scalable, cost-effective strategy for their own institutions to spotlight success stories and support strategic goals.
Creating an Institutional Culture of Accessibility
Many institutions approach accessibility in silos and rely on specialized offices rather than embedding responsibility across the entire campus. In Creating an Institutional Culture of Accessibility, Laura M. Romeo, Ph.D., Director of Learning Innovation, Development, and Scholarship, Edge, shared strategies for shifting to an integrated, institution-wide approach. The discussion looked at governance structures, training models, and change management strategies that can empower all departments to play a role in accessibility. Attendees gained practical insights on overcoming implementation barriers, methods to measure cultural progress in institutional accessibility efforts, and how to foster a culture that moves beyond compliance to true inclusivity.
Celebrating Achievements and Contributions
At this spring event, Edge celebrated the exceptional work of higher education institutions across the region by presenting a series of awards to recognize their impressive impact and accomplishments.
The Preserving Education Through Change Award was presented to Montclair State University (MSU) and accepted by the University's President, Jonathan Koppell, to recognize the significant vision and leadership by one of New Jersey's signature public universities. As the educational community navigates unprecedented change, it is vital that institutions seek opportunities for collaboration and support. MSU has been a primary example of this kind of support and stewardship in recent years. As MSU itself continues to grow, the institution has also shown a capacity for significant leadership, leading a merger with the former Bloomfield College, now Bloomfield College of Montclair State University. By preserving continuity and quality of educational opportunities at an institution noted for serving first-generation students and those from diverse backgrounds, MSU has strengthened New Jersey's educational system and the college experience for their students.
In pursuit of educational and operational excellence, institutions across the country are working tirelessly to modernize systems and processes to better serve their communities. As funding and staffing challenges continue to affect institutions of all kinds, the effort to transform the educational experience can be difficult. The Member Technology Transformation Award was given to Felician University to recognize their transformative efforts in modernization through growing engagement with Edge and the member community, as well as a dedicated strategic focus on improvements in university infrastructure, digital experience, and excellence in the delivery of diverse educational offerings. The award was accepted on Felician’s behalf by Deanna Valente, Dean of the Center for Information Systems and Technology & Learning Development.
To recognize the remarkable commitment to expanding and innovating online education, Kean University was presented with the Online Learning Growth Award. As one of New Jersey’s fastest-growing institutions in digital learning, Kean University has demonstrated strategic leadership in developing flexible, high-quality online programs that meet the evolving needs of today’s learners. Through a forward-thinking approach and investment in instructional design, technology, and student support, Kean has significantly broadened access to education for diverse student populations across the region and beyond. The award was accepted on Kean’s behalf by Jay O’Callaghan, VP for Transformational Learning & Chief Online Officer.
Through the visionary leadership of their Center for Online Learning, Hudson County Community College (HCCC) has embedded accessibility and Universal Design for Learning into the foundation of its online course development. By proactively prioritizing inclusivity, leveraging data-informed strategies, and fostering a campus-wide culture of digital equity, HCCC has created a truly inclusive online learning environment. Their work not only meets ADA Title II compliance but sets a powerful example for institutions statewide and nationwide. To celebrate this innovation, dedication, and transformative impact on student success and educational access, HCCC was given the Accessibility in Digital Education Award, which was accepted on their behalf by Executive Director, Center for Online Learning, Matthew LaBrake.
By fostering meaningful dialogue and showcasing innovative practices, EdgeCon Spring 2025 not only sparked new ideas but also strengthened the connections that can help drive higher education forward. As institutions navigate a rapidly changing landscape, the shared insights and partnerships formed at EdgeCon play a vital role in shaping a more connected, resilient, and student-centered future.
The post EdgeCon Spring 2025 appeared first on NJEdge Inc.
Last Friday, April 11, three of us, Elettra Bietti, Aileen Nielsen, Laura Aade, co-organized a workshop titled “The Battle for Our Attention: Empirical, Philosophical and Legal Questions” which took place at Northeastern University School of Law, and benefited from the support of CLIC, Northeastern’s Center for Law, Information and Creativity, and the involvement of Harvard’s Berkman Klein Center community of fellows and faculty. The event brought together leading legal scholars, policymakers, economists, medical scientists, computer scientists, media scholars, and technologists to address the pressing issue of how today’s digital technologies are transforming the understanding, use, and allocation of human attention, including implications for how we spend our time and what information we consume.
The discussion was wide-ranging, interdisciplinary, and deeply enlightening. We discussed whether attention “actually exists”, how it works, the history and business models of attention capture, the challenges and findings that arise from empirical studies of attention and attention markets, the relation between attention, intimacy, convenience and the law of trademarks, possible analogies with tobacco and gambling litigation, and the policymaking associated with regulating engagement and children’s use of social media.
The event began with a panel on the political economy of attention. Yochai Benkler kicked off the discussion with an overview of the capitalist drive to capture and instrumentalize attention over time, beginning with the 19th century press and culminating in today’s digital technologies. He argued that markets won’t solve attention problems and could exacerbate attention harms, in contrast with Marshall Van Alstyne’s suggestion that a Coasean model of attention rights could help platform owners manage misinformation and reduce incentives to share inaccurate or false information. Where Benkler advocated for the decommodification of attentional experiences, Van Alstyne advocated for a market regime of incentives and individual rights to speak and listen. David Lazer, for his part, adopted a middle ground position, presenting several findings on the slow but steady decoupling of content from its sources. He showed that information has become less traceable to sources, and discussed chatbots’ role in producing knowledge that is increasingly divorced from reliable reference to authors and media sources.
The second morning panel addressed the empirics of attention. Michael Esterman discussed some of his clinical work, showing that attention is a fluctuating, fragile process deeply shaped by cognitive and environmental factors. Sustaining attention for long periods of time and across contexts remains a phenomenon that is not well understood, and Esterman presented results showing that blocking a population’s mobile phone access for two weeks could improve participants’ attention, as well as their mental health and well-being. Esterman also pointed to the need for increasing measurements outside of laboratory settings to better understand the external validity of fundamental psychological results related to attention. Elena Glassmann then approached attention from the perspective of an interface designer, emphasizing that platforms actively shape how users direct their attention — often without users realizing it. Glassmann highlighted the danger of decontextualisation, where AI-driven tools summarize content by stripping away critical context and leaving users unaware of biases or omissions, and suggested ways to help people build reality-grounded mental models that provide access to contextual information, rather than hiding complexity. Christo Wilson concluded the panel with an overview of empirical approaches to studying attention platform business models, highlighting his role with David Lazer in creating and hosting the National Internet Observatory at Northeastern, a center that offers tools for researchers to study how people behave online in response to particular design features and platform strategies over long periods of time.
During the lunch keynote, FTC Commissioner and Law Professor Alvaro Bedoya spoke of his effort building a team of doctors and psychologists at the FTC whose focus and expertise includes children’s mental health and well-being. He also spoke of his work advocating for children’s privacy under COPPA and of the analogies and differences between tobacco, sports gambling and addiction to technological devices and products. Commissioner Bedoya suggested that more research needs to be done to better understand which products, platforms and specific technological features cause addiction and other mental health disorders.
The afternoon began with a third panel on media and communication systems for attention capture. Nick Seaver presented a spirited argument that attention may, in fact, not exist at all. While holding and waving a mouse jiggler, Seaver showed that attention is primarily defined or constructed by the way it is measured. Measurement, in turn, serves primarily as a managerial tool of control. While they might appear to be measuring participation, platform designers are in reality disciplining, tracking and controlling populations. Bridget Todd spoke of her work in the podcasting world, emphasizing the relation between intimacy and attention: audiences pay attention based on proximity to particular types of content and the emotions that content generates for them. Her view is that the current digital economy prioritizes profitable outrage over thoughtful storytelling, but that we should always push for the latter. Emily West presented some of her research on Amazon through the lens of convenience. Attention and addiction to digital products are promoted by appealing to convenience: platforms engineer frictionless experiences to generate user dependencies, producing a culture of learned passivity and inattention that quietly erodes agency. Rebecca Tushnet spoke of the law of advertising and the doctrines of dilution and confusion under trademarks law, explaining that the law simultaneously invokes but misunderstands the science of, or empirical realities of, human attention, protecting only those parts of attention that can be owned under intellectual property regimes. Similar to Seaver’s argument that attention is effectively what we can measure, Tushnet’s presentation highlighted that we live in an economy of signals and containers of attention.
The day ended with a panel discussion on legal and policymaking efforts in the attention space. Richard Daynard shared key takeaways from his litigation experience fighting tobacco and gambling companies. He explained that these industries intentionally engineer their products to addict users while funding research that shows the exact opposite, namely that their products are not addictive and individuals who engage in excessive use are the ones to blame. He added that these companies often lose in product liability litigation, where strict liability regardless of intention is the standard. Zephyr Teachout then offered an overview of the evolving Supreme Court jurisprudence on the First Amendment, arguing that current shifts in the court’s composition and caselaw are opening the door to possible legislation and reform in the attention space, something that until recently seemed largely implausible. Leah Plunkett discussed state social media laws and described them as providing financial compensation, privacy safeguards for children and workplace protections. She focused on a recent Utah law that allows children to sue their parents for compensation when their image is used in their parents’ social media feed for profit and described her involvement in drafting a model law on this theme for the Uniform Law Commission. Woody Hartzog concluded the panel presentations, discussing his work with Neil Richards on wrongful engagement, a tort which would allow individuals to sue digital companies for profiting from their addiction and engagement while neglecting users’ well being.
The event ended with participants discussing potential research overlaps, future collaborations and potential for advocacy across US regions. In the words of CLIC Director Alex Roberts, who moderated the last panel, “[i]f an interdisciplinary field of “attention studies” wasn’t already a thing, it is now.”
This event would not have been possible without support and assistance from Northeastern’s CLIC, Alexandra Roberts, Jennifer Huer, Walaa Al Awad, Natalia Pifferer, Brad Whitmarsh, and Jacob Bouvier. We also thank Harvard Law’s Laura Zeng, and BKC’s Bey Woodward and Jonathan Zittrain for additional enthusiasm and assistance.
We hope to continue this important conversation in the months and years to come with all of you. If you would like to join future conversations, we have created a regional mailing list which you can sign up to here.
Reporting from “The Battle for Our Attention” Workshop @ Northeastern, April 11, 2025 was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.
Today, we’re sharing important news about the future of Ceramic.
With the rapid rise of agents as the new critical user and interface to the web, it’s increasingly clear that supporting a healthy, trustless, decentralized model for AI is essential. Our team, part of Recall Labs after our merge with Textile, is shifting our primary focus to Recall: a platform where AI agents prove themselves, improve themselves, and earn for their intelligence.
As part of this, we’ll be repurposing parts of Ceramic, deprecating others, and introducing a standalone open source version for the community. We want to openly communicate what this means for you and your applications built on Ceramic, as well as outline the path forward.
From Ceramic to RecallOver the years, Ceramic has been pivotal in helping us understand decentralized data, composability, and robust synchronization. It’s been the leading technology for dozens of applications and networks in need of scalable, edge-centric, verifiable data storage and replication. We spent more than five years developing and iterating on Ceramic and are immensely proud of the advancements and technology we’ve shipped.
Even more than that, we are so thankful for your partnership. Our customers and community have relentlessly innovated on brand new tech, helped us move from prototypes to stable networks, and been the driving force for our continued commitment to open, decentralized data. Thank you for all of your support, and all you’ve helped us learn.
One of the persistent challenges we’ve faced with Ceramic has been the UX of decentralized data systems: managing keys and signing data, novel access control flows, and counterintuitive patterns for data flow have been challenges for many users and developers. Interestingly, all these properties are native and intuitive for the internet’s new class of users: agents.
Recall is a cryptoeconomic platform for AI agents to prove, improve, and earn based on their intelligence. It builds heavily on the technology we’ve built previously, but is a new platform and demands our full commitment.
What This Means for Ceramic Users
We deeply appreciate the trust and investment you've made in Ceramic. While our priority is Recall, we remain committed to setting Ceramic users up for success as we sunset certain parts of Ceramic. Here's exactly what you need to know:
Introducing ceramic-one
Ceramic-one is the most performant, stable, and decentralized implementation of Ceramic. There is a new client SDK for reading/writing to ceramic-one: https://github.com/ceramicnetwork/rust-ceramic/tree/main/sdk
Ceramic-one introduces Recon, enabling reliable synchronization and historical data syncing across nodes. All existing data from ComposeDB remains fully accessible and readable through ceramic-one.
Ceramic-one will continue to function independently, with no dependency on our infrastructure or any other centralized authority. Further, Ceramic-one is under an MIT license, so anyone who wants to fork it and continue developing it on their own is welcome to do so.
Anchoring and Conflict Resolution on Ceramic-one
We're committed to completing one final critical feature for ceramic-one: self-anchoring to Recall. This allows fully decentralized timestamping without relying on our centralized CAS or Ethereum L1, making ceramic-one truly self-sufficient. This will be implemented sometime after the mainnet release of Recall.
Manual conflict resolution is currently possible through ceramic-one by exposing all stream HEADs, allowing applications to apply their own business logic to select which branch(es) of the stream's history they wish to use. We're also considering building automated conflict resolution (based on anchor timestamps), the only remaining functionality from js-ceramic not yet implemented in ceramic-one. If this feature is important to your use case, please reach out, as your feedback will influence our decision.
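As a rough illustration of such application-level logic, here is a minimal Rust sketch that chooses among competing stream HEADs by earliest anchor timestamp. The types and field names are hypothetical stand-ins for illustration, not the ceramic-one SDK's actual API.

```rust
// Hypothetical sketch of application-side conflict resolution over a
// stream's competing HEADs. None of these types come from ceramic-one;
// they only illustrate "pick a branch by anchor timestamp" logic.

#[derive(Debug)]
struct StreamHead {
    commit_cid: String,            // tip commit of one branch of the stream
    anchor_timestamp: Option<u64>, // seconds since epoch, if anchored
}

/// Business rule: prefer the branch that was anchored earliest; branches
/// that were never anchored lose to any anchored branch.
fn resolve(heads: &[StreamHead]) -> Option<&StreamHead> {
    heads
        .iter()
        .min_by_key(|h| h.anchor_timestamp.unwrap_or(u64::MAX))
}

fn main() {
    let heads = vec![
        StreamHead { commit_cid: "bafy...aaa".into(), anchor_timestamp: Some(1_700_000_000) },
        StreamHead { commit_cid: "bafy...bbb".into(), anchor_timestamp: None },
    ];
    if let Some(winner) = resolve(&heads) {
        println!("selected branch: {}", winner.commit_cid);
    }
}
```

A real application might instead prefer the longest anchored branch or apply document-specific merge rules; the point is that, with HEADs exposed, the policy lives in the application.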
After these improvements, ceramic-one will reach a feature-complete MVP. At that point, we will only prioritize critical bug fixes. ceramic-one will continue to exist as stable software under an open MIT license—fully available for anyone who wishes to fork, improve, or independently evolve it.
Deprecation of js-ceramic and ComposeDB
Effective immediately, we're deprecating js-ceramic and ComposeDB. We recommend migration to ceramic-one, the streamlined, performant successor, as soon as possible.
Migration Steps:
A new client SDK for ceramic-one is now available here: ceramic-one SDK
Step-by-step migration guidance is provided in our upgrade guides: Migration Guide on Ceramic Blog
Detailed Upgrade Instructions on GitHub
Timeline and Support
ComposeDB and the Ceramic Anchor Service (CAS) will be completely shut down at least one month after Recall's Mainnet launch (exact date TBD, but expected in mid-2025). After that date, ComposeDB-dependent apps will break if not migrated to ceramic-one.
Recall: The Future of Decentralized Intelligence
Ceramic's legacy and your contributions have directly influenced our development of Recall. Many of Ceramic's strengths—openness, transparency, and decentralization—live on and evolve within Recall's cryptoeconomic framework.
Recall aims to become a vibrant ecosystem for AI builders and developers. We warmly invite Ceramic users to explore the opportunities Recall offers: building intelligent agents, participating in verifiable competitions, and earning from proven performance.
We’re grateful for your support of Ceramic and excited about this next chapter with Recall. Our team remains available to assist your migration and ensure your continued success. If you have questions related to Ceramic, please reach out to us here. If you’re interested in learning more about Recall, find us on Twitter!
Warm regards,
The Recall Labs Team
Internet Safety Labs testified in support of the Massachusetts Consumer Data Privacy Act (H78) and Massachusetts Data Privacy Act (H104), advocating for strong data minimization, restrictions on sensitive data sales, and robust enforcement to protect residents’ privacy. We’re grateful to the Massachusetts Legislature for hearing our testimony. The written testimony is available to view, along with a video of the testimony below:
The post Internet Safety Labs Provides Testimony for Massachusetts Data Privacy Acts appeared first on Internet Safety Labs.
As regulations evolve and technology advances, education institutions face increasing pressure to ensure compliance across:
Technologies such as AI, data analytics models, and more are reshaping compliance. This symposium brings together experts and institutional leaders to explore:
Attendees will gain practical insights to:
Strengthen institutional compliance
Protect sensitive data
Foster a secure, inclusive digital environment while navigating the complexities of modern education.
Topics to be explored during the Symposium include, but are not limited to:
Data Privacy & Governance
Digital Accessibility & Inclusive Design
Regulatory Compliance & Legal Frameworks
Regulatory Cybersecurity Compliance (GLBA, PCI, NIST, and more)
Compliance and Cyberinsurance
AI, Automation, and Risk
Institutional Risk Management & Resilience
Vendor & Third-Party Risk Compliance
Culture of Compliance: Training & Leadership
Emerging Threats & Future-Proofing Compliance
Case Studies in Compliance Success
Who should attend the Symposium
Cabinet Level Leaders
Provosts & Academic VPs
CIOs
CISOs
Compliance Officers
Risk Management Officers
Directors of Online Learning
Directors of Centers for Teaching and Learning
Register Now »
Vendor/Sponsorship Opportunities
Exhibitor Sponsorships are available. Vendors may also attend the conference without sponsoring, but at a higher ticket price.
Contact Adam Scarzafava, Associate Vice President for Marketing and Communications, for additional details via adam.scarzafava@njedge.net.
Download the Sponsor Prospectus Now »
Call for Proposals
Submit your presentation topic for the upcoming Compliance in Education Symposium, presented by Edge and Stockton University!
Submit Proposal »
More information coming soon!
The post Compliance in Education Symposium appeared first on NJEdge Inc.
Date: October 9, 2025
Location: Rider University
Time: 9 a.m.-5 p.m.
Attendee Ticket: $49
Event Location:
Rider University
Submit your presentation topic for the upcoming EdgeCon Autumn 2025 conference! This year’s conference will focus on accelerating modernization efforts for cybersecurity, campus networks, cloud strategy, student support applications, and more.
The call for proposals will be open until July 24th, and presenters will be notified by July 31st if their session has been selected. Submit Proposal »
Vendor/Sponsorship Opportunities at EdgeCon
Exhibitor Sponsorship and Branding/Conference Meal sponsorships are available. Vendors may also attend the conference without sponsoring, but at a higher ticket price of $250.
Contact Adam Scarzafava, Associate Vice President for Marketing and Communications, for additional details via adam.scarzafava@njedge.net.
Download the Sponsor Prospectus Now »
Accommodations
Hilton Garden Inn Princeton Lawrenceville
1300 Lenox Drive
Lawrenceville, NJ 08648
You may also reserve a room by contacting the hotel directly at 609-895-9200. Be sure to mention that you’d like your reservation under the “EdgeCon Group Block” to obtain the conference rate.
Details of Conference Proceedings and Submissions Form are now available »
The post EdgeCon Autumn 2025 appeared first on NJEdge Inc.
Six months ago, Hyperledger Indy on Besu officially joined the did:indy method with the introduction of the did:indy:besu identifier. This milestone brought Hyperledger Indy, an LF Decentralized Trust project, closer to becoming a key player in Self-Sovereign Identity (SSI) frameworks, with the potential to be a Trusted List Provider in the European Digital Identity Wallet (EUDI Wallet) under eIDAS 2.0. By aligning with W3C Verifiable Credentials (VC) and Decentralized Identifiers (DID) standards, Indy on Besu enhances interoperability, scalability, and usability for digital identity solutions.
According to the Harvard Review, 96 percent of commercial programs include open source software, translating to $8.8 trillion of value to businesses worldwide. And the decentralized technology market, per Fortune Business Insights, is set to grow from $86.53 billion in 2025 to $457.35 billion by 2032.
Blockchain Commons focused on the ZeWIF project for the Zcash blockchain during the first quarter of 2025, but that didn’t stop us from also advancing a few other priorities, including the initial release of our long-incubating Open Integrity project. Here’s what we’ve been working on.
ZeWIF Specification: Why It’s Important · The ZeWIF Meetings · What We’ve Released So Far · What’s Still to Come
Post-Quantum Commons: Why It’s Important · PQC Meeting · QuantumLink · What We’ve Released So Far
Open Integrity: Why It’s Important · What We’ve Released So Far
New Articles: The Right to Transact · SSI Orbit Podcast
New Dev Docs: SSKR Pages · UR Pages · Meetings Pages · Improved Crate Docs
New Ports: Lifehash to Rust
New Research: Provenance Marks · Provenance References
ZeWIF Specification
The Zcash extensible Wallet Interchange Format (ZeWIF) has been Blockchain Commons’ main priority since the Zcash Community Grants program approved our proposal at the end of the year.
Why It’s Important. Blockchain Commons talks a lot about interoperability, and that’s what ZeWIF is: it’s a way to freely exchange data between Zcash wallets in a standardized form. What we don’t talk about as often is why interoperability is important.
It goes back to our Gordian principles. Interoperability supports at least three of them.
It supports independence because any user can freely move their data among the interoperable software systems.
It supports openness because any developer can easily join the ecosystem by adopting a mature, well-understood specification. This creates an environment of coopetition (cooperative competition) that leads to advances in technology and usability.
It supports resilience because the interoperable format makes data less likely to be lost: it’ll be in a form that will be understood, and therefore accessible, well into the future.
For a wallet ecosystem, interoperability means that users will have the freedom to move their digital assets among Zcash wallets. That’s the precise independence we want users to have, which is why it’s been worth spending a few months of time on this project.
The ZeWIF Meetings. To support the ZeWIF project, Blockchain Commons held three meetings in the first quarter: on the initial wallet survey, on the first demo of the zmigrate tool, and on ZeWIF data abstractions. Meeting with developers to ensure that specifications serve everyone’s needs has always been a bedrock policy for Blockchain Commons, so we’ve of course extended it here. (At least one more meeting is planned, for April 16th, to demo the data file format.)
#1: Wallet Survey · #2: Zmigrate Demo · #3: Abstraction Discussion
What We’ve Released So Far. The ZeWIF project is built around the zewif library, which stores Zcash wallet data in an in-memory format. We’ve also written zewif-zcashd and zewif-zingo to demonstrate the import of data from those wallets to ZeWIF, while our partners at Zingo Labs have produced “zewif-zecwallet” for importing zecwallet data (though that PR hasn’t been merged yet). Finally, we authored zmigrate, a demo CLI for importing zcashd content. With these demos and libraries in hand, other wallet developers can start working to interchange their own data via the ZeWIF format.
What’s Still to Come. With our releases so far, data can be interchanged between different Zcash wallets as long as it’s all done on the same machine: you just import into the in-memory ZeWIF format from one wallet, then export to another. But we expect most use cases will instead involve at least two different machines. That’s where the ZeWIF file format comes into play. Building on Gordian Envelope, it translates the ZeWIF in-memory storage into a fixed file that can then be moved to a different machine. We expect to demo the Envelope ZeWIF format at that upcoming April 16th meeting.
Post-Quantum Commons
Is Post-Quantum Cryptography (PQC) the next big thing? We were able to support our friends at Foundation with some PQC work just as we got the ZeWIF project going at the start of the year.
Why It’s Important. Quantum Computers are starting to appear, and though they’re far, far from what would be needed to break cryptography at the moment, there’s no telling when there’s going to be a sudden “quantum” leap. Though today’s cryptography might not need PQC, if you’re working on something that might be around for 5 or 10 years, you should be thinking about it!
PQC Meeting. Our March Gordian meeting contained all the details of our PQC work, including what Quantum Computing is.
QuantumLink. The highlight of the meeting was a discussion of QuantumLink from our friends at Foundation. This is a quantum-resistant protocol for Bluetooth communication that is a critical component of their new Passport Prime wallet. It allows them to use Bluetooth with high security and for that security to remain strong 5 or 10 years in the future, in case Quantum Computing does make those big strides forward.
What We’ve Released So Far. The QuantumLink technology was enabled by new PQC support that Blockchain Commons incorporated into its libraries. You can now use the PQC algorithms ML-DSA and ML-KEM in our Rust stack, from bc-components on up.
Open Integrity
We had one other big release in Q1: Open Integrity, a system for increasing trust in software developed with Git. Well, technically it was released on April 7th, the 20th anniversary of Git’s release, but this is something that Blockchain Commons Architect Christopher Allen has been working on for about a year, so we’re thrilled to get the word out.
Why It’s Important. We wrote a whole article discussing why Open Integrity is important. But, in short: Git repos are growing increasingly important for the deployment of critical software, and Git doesn’t actually provide a high level of trust for those repos, despite the ability to sign commits. Open Integrity bridges the gap between what Git offers and what online software distribution needs.
What We’ve Released So Far. The Open Integrity project is now available in an early form at GitHub. A Problem Statement offers details on the issues we’re trying to solve and our solutions. We’ve also got a number of tools and scripts, the most important of which creates an inception commit on a repo. This inception commit is the root of trust that lays the foundation for ensuring that you always know who’s in control of a repo.
If you want to try out Open Integrity, see the Open Integrity Snippets file, which has complete instructions on how to get that inception-commit script running, plus many examples that will allow you to experiment with Open Integrity.
New Articles
That’s it for our big Q1 projects, but we had a number of smaller releases over the course of the quarter as well.
The Right to Transact. We think the right to transact should be an international freedom. Read more in Christopher’s recent article, “The Case for an International Right to Freedom to Transact”, which builds on his 2024 musing, “How My Values Inform Design”.
SSI Orbit Podcast. Christopher also was interviewed for the SSI Orbit Podcast. It’s been nine years since he wrote the foundational “Path to Self-Sovereign Identity”. Where do things stand today?
New Dev Docs
SSKR Pages. We updated our SSKR dev pages, with the big focus being differentiating SSKR URs (where you split up a key) and SSKR Envelopes (where you protect the entire contents of an Envelope by creating and splitting up a key). The test vectors also now demonstrate both cases.
UR Pages. We similarly did some big updates to our UR dev pages. Here the issue was that URs had seen some changes over the years, especially as we locked down CBOR tag registration, and our examples were no longer up-to-date. Now, every page and every test vector should be correct. There’s also a new page on URs for Gordian Envelope.
Meetings Pages. All of Blockchain Commons’ meetings are now documented on a new meetings page, which also includes subpages with videos and slides (and usually transcripts) of everything from the last few years!
Improved Crate Docs. Finally, we’ve used some lessons learned from the documentation of ZeWIF to improve the documentation of our Rust stack. As a result, docs.rs now has improved docs for bc-dcbor, bc-components, and bc-envelope.
New Ports
Lifehash to Rust. Lifehash is now available in a new Rust implementation courtesy of Ken Griggs. Lifehash is a critical element of the object identity block, which can help users to recognize seeds and keys. We hope this will allow for more deployment.
New Research
Provenance Marks. One of our newest innovations, courtesy of Blockchain Commons Lead Researcher Wolf McNally, is the provenance mark: a cryptographically secure chain of marks that facilitates the easy verification of authenticity. We’ll have more on them in the months ahead, but in the meantime, you can read Wolf’s research paper on the topic!
Provenance References. If you want to start playing with provenance marks right now, we’ve already released a series of reference apps and libraries. They include our original Swift implementation, our newer Rust implementation, and a CLI that can be used to create and manage marks!
That’s it for the moment. For the next quarter, we’ll be closing out our initial work on ZeWIF in April, and we’ll be offering more looks at Provenance Marks in the months ahead.
If you’d like to work with us on these or other topics, drop us a line about becoming a Blockchain Commons partner.
We Are Open Co-op (WAO) is a collective of individuals who share a commitment to ethical, inclusive, and sustainable practices in all aspects of our work, including AI literacy. Our approach to this area is grounded in the belief that AI is an extension of digital literacies, not a separate field. We aim to demystify AI, helping people recognise that the digital literacy skills they already possess are directly applicable to AI.
AI’s societal impacts are significant and well-documented, but the public often struggles to grasp its opportunities and risks. A trusted guide is needed to balance optimism with caution, particularly given AI’s potential effects on jobs, culture, and other critical areas.
The project
We are delighted to have started work on a new project with the Responsible Innovation Centre for Public Media Futures, hosted by the BBC. As an institution with a long and valued history of delivering high-quality educational initiatives that help audiences understand and navigate new technologies, we believe the BBC is well placed to improve public understanding of and engagement with emerging technologies such as AI.
Our focus is research and analysis which aims to find gaps in provision for younger audiences. Over the next few months we’ll be interviewing experts, reviewing resources, and scrutinising frameworks. We will be using this research to create a report along with a framework and guide which will ultimately help the BBC create policies and content for young people. We’ll be sharing an open version of what we create, so look for those in the summer.
Our approach
Starting with desk research, and building on the work we’ve already curated, we’re creating a library of interesting definitions, frameworks, and resources that can help us understand what other people are exploring when it comes to AI Literacy in combination with public media.
As with our work with Friends of the Earth, where we researched AI and environmental justice, we will bring together a variety of experts to give feedback and sense-check what emerges from the research.
Along the way, we’ll share updates based on our findings, so if you know of a person, organisation, initiative, framework, or resource that we should take a look at, please let us know! We’ll also be updating ailiteracy.fyi, our one-stop-shop for all things related to this important topic.
What does AI Literacy look like for young people aged 14–19? was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
The Decentralized Identity Foundation (DIF) has officially launched its Hospitality & Travel Working Group, evolving from the ongoing H&T Special Interest Group (SIG). This new working group will focus on developing standards, schemas, processes, and documentation to support the self-sovereign exchange of data between travelers, services, intermediaries in the hospitality and travel industry, and their AI agents.
Mission and Focus
The primary goal of the working group is to enable travelers to maintain ownership and control over their personal data while allowing for seamless interactions with travel service providers. The group will address critical aspects of traveler profiles, focusing on data portability, privacy, and interoperability across the travel ecosystem.
Meeting Schedule
The working group will convene twice weekly:
Tuesdays at 10:00 AM Eastern Time
Fridays at 10:00 AM Eastern Time
These regular meetings will facilitate ongoing collaboration among industry stakeholders, technology providers, and standards bodies.
Leadership Perspectives
"We are enormously proud that the hard work of our dedicated Hospitality & Travel SIG has led to a Working Group that will develop key specifications. From sharing boarding passes and hotel preferences to loyalty programs and dietary requirements, travelers constantly provide the same information to different companies throughout their journey.
"Travelers need to share, and overshare, the broadest range of personal data on their voyage, ranging from seat or dietary preferences, to government identities or passports, across multiple service providers who may not have the best data handling practices. This working group will develop standards that allow travelers to control exactly what data they share, with whom, and for how long - eliminating both unnecessary data exposure and the frustration of repeatedly entering the same information," said Kim Hamilton Duffy, Executive Director of the DIF.
Douglas Rice, industry veteran and Hospitality & Travel Working Group chair added: "The travel ecosystem has long struggled with fragmented approaches to customer data and identity management. This working group will help establish the technical foundations needed for travelers to maintain control of their data while enabling the personalized experiences they expect. We're building toward a future where your travel preferences, loyalty information, and credentials can move with you seamlessly across your journey—all while maintaining the highest standards of privacy and security, and enabling AI agents to act on verified information about the traveler.”
Next Steps
The working group will initially focus on defining standardized schemas for travelers or their digital agents to present profiles, establishing protocols for secure data exchange, and developing guidelines for implementation across various travel touchpoints. Industry participants are encouraged to join the working group to contribute their expertise and perspectives.
For more information about participating in the DIF Hospitality & Travel Working Group, visit the Decentralized Identity Foundation website.
On March 18, 2025, the FIDO Alliance convened its APAC regional members and key stakeholders at the Telecommunications Technology Association (TTA) Auditorium in Seongnam, South Korea, for a full-day meetup and workshop. The event focused on advancing simpler, stronger authentication across the region and served as a vital platform for technical updates, regional progress, and real-world implementation insights around passkeys.
Among the 70+ participants on-site, we were honored to welcome six FIDO Alliance Board members representing Samsung Electronics, NTT Docomo, Lenovo, RaonSecure, Egis Technology, and Mercari—underscoring the global engagement and strategic importance of this gathering.
Before the main program, international attendees were invited to a special TTA Lab Tour, offering a behind-the-scenes look at Korea’s testing and standards infrastructure supporting FIDO and other telecommunications technologies.
Showcasing Technical Leadership and Regional Collaboration
The day featured an exciting lineup of expert speakers and educational sessions, reflecting the expanding role of passkeys as a trusted, phishing-resistant, and user-friendly authentication solution for both public and private sectors.
The event opened with an inspiring keynote by Dr. Koichi Moriyama (Chair, FIDO Japan WG; W3C Advisory Board Member), who emphasized the importance of global collaboration in setting interoperable, secure technology standards.
David Turner, Senior Technical Director at FIDO Alliance, shared in-depth updates on passkey advancements and highlighted future areas of focus, including developer support, user experience, and broader international engagement.
Wei-Chung Hwang of ITRI presented a thoughtful comparison of passkeys and PKI, outlining how the two can coexist and complement each other within modern authentication architectures.
Ki-Eun Shin, Principal Software Engineer and FKWG Vice Chair, offered a practical guide for developers building scalable and secure passkey systems, covering implementation, testing, and UX considerations.
Dovlet Tekeyev from AirCuve introduced Korea’s updated Zero Trust Guideline 2.0, walking the audience through key principles, recommendations, and how FIDO solutions align with national cybersecurity strategies.
Eugene Lee, Vice President at RaonSecure, shared cross-industry deployment experiences of FIDO-based biometric authentication, highlighting its adaptability to diverse sectors including finance and telecom.
Jong-Su Kim, Principal Security Engineer at Samsung Electronics, concluded the technical sessions by sharing Samsung’s vision of simplifying cybersecurity for all users through FIDO-driven innovation.
Regional Insights and Shared Momentum
The day closed with regional updates featuring representatives from Japan (Naohisa Ichihara, FJWG Co-Vice Chair and CISO at Mercari), China (Henry Chai, FCWG Chair and CEO at GMRZ Technology, Subsidiary of Lenovo), Taiwan (Karen Chang, FTF Forum Chair and VP at Egis Technology), Malaysia (Sea Chong Seak, CTO at Securemetric), and Vietnam (Simon Trac Do, CEO & Founder at VinCSS), each presenting local updates on passkey deployment. Speakers shared technical challenges, user adoption, and the growing importance of cross-border cooperation to accelerate the passwordless future across APAC.
Moving Passwordless Forward Together
The FIDO APAC Regional Member Meetup & Workshop reaffirmed our collective commitment to advancing phishing-resistant passwordless authentication across the region. Thanks to all the speakers, sponsors, and attendees who contributed to this energizing and forward-looking event.
Stay tuned for more cross-regional collaborative events in the APAC region and updates from the FIDO Alliance as we continue to make online authentication simpler and stronger together.
Webinar
March 27, 2025
10:00 AM ET
Join us for an insightful session on the successful implementation of the Zoom Phone project at Stevens Institute of Technology. This initiative aimed to modernize the campus communication infrastructure by transitioning from traditional phone systems to the innovative Zoom Phone service. The project involved meticulous planning, procurement, and deployment phases, ensuring a seamless transition for all departments.
We will delve into the change management strategies employed to facilitate this significant shift, including comprehensive user training, stakeholder engagement, and continuous support. The project management efforts were pivotal in coordinating the migration of facilities phones, decommissioning outdated systems, and ensuring compliance with new regulations such as the Ray Baum Act and Kari’s Law.
Discover how the collaborative efforts of the IT team with institutional stakeholders led to the project’s success. We will share key insights, challenges faced, and the remarkable outcomes achieved, including enhanced remote work capabilities, improved emergency calling services, and significant cost savings.
This session will provide valuable lessons for institutions looking to upgrade their communication systems and navigate the complexities of large-scale IT projects.
Presenters:
Maryam Mirza, Senior Director for IT, Client Experience and Strategic Initiatives, Stevens Institute of Technology
Hammad Ali, Senior Director of Infrastructure Services, Stevens Institute of Technology
Luis Quispe, Associate Director of Network and Telecom Engineering, Stevens Institute of Technology
Complete the Form Below to Access Webinar Recording
The post Transforming Communication: The Zoom Phone Project at Stevens Institute of Technology appeared first on NJEdge Inc.
Let’s be honest. The world feels…complicated right now. The news is relentless, anxieties are high, and it’s easy to feel isolated. But be assured, there are people, even now, who get up every day and try to make the world a little better. As ever, we’re pleased to work with such people to try and build deeper, more resilient communities — and to remember that we’re not alone.
Building Open Communities (image cc-by-nd Bryan Mathers for WAO)
We Are Open Co-op is working in collaboration with Amnesty International UK (AIUK) on a project focused on a new platform for activists. This is built on a shared fundamental belief that the most impactful change comes from strong, engaged communities.
Like many of the organisations we work with, AIUK has a vibrant and diverse network of activists. Communities and networks are complex and come with common challenges. There can be confusion, discomfort, and the feeling that things aren’t really resulting in meaningful action. There are so many people and activities it can be hard to create cohesion and connection. Understanding who people are, what drives them and listening to them is at the core of this project.
Our Community Platform project with AIUK is a place where we can put our deep understanding of how communities thrive to good use.
What are we doing?
Starting with empathy: We know that understanding different perspectives and experiences is hugely important for building healthy communities. Our collaboration will be rooted in actively listening to and valuing the insights of people committed to AIUK’s mission.
Spurring collaboration: We are new to the AIUK community, so we are looking forward to collaborating with people with diverse voices and new approaches. We are pleased that we’ll be collaborating with AIUK staff, educators, activists and Torchbox, an agency helping with Amnesty International UK’s digital transformation.
Showing up: We love to step outside of our comfort zones, to experiment with new ways of working, and to be open to the possibility of failure — because learning from mistakes is kind of great.
No, what are you actually doing?
Oh, ok, well, if you want to be really specific:
Carrying out user research to better understand the AIUK community and how a community platform can help AIUK better support them
Collaborating with the DDaT team and AIUK’s new Community Manager to create new processes and policies which will help eliminate frustrations and problem areas
Creating missions to help playtest a number of candidate community platforms that have the potential to help AIUK’s activists connect, learn and thrive
The Power of the Network
We’re not just isolated individuals pursuing our own interests. We’re part of a larger, interconnected web. People tend to be members of multiple communities, so insights from those overlapping experiences are valuable. This crossover, this overlapping of passions and interests, is a vital strength, but it also needs intentional nurturing.
(image cc-by-nd Bryan Mathers for WAO)
Moving Forward
We’re committed to building healthy communities, and that means planned, moderated, and actively cared-for communities. We’re going to, as we do, work openly with Amnesty International UK, and are, as ever, always excited for your engagement and feedback.
We have some spare capacity on top of this work, so if you have a problem, if no one else can help, get in touch!
Strengthening Community was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
Organizations are encouraged to take the Passkey Pledge ahead of World Passkey Day on May 1
MOUNTAIN VIEW, Calif., April 9, 2025 – The FIDO Alliance is inviting organizations around the world to take the Passkey Pledge, a voluntary commitment to increase awareness and adoption of passkey sign-ins to make the web safer and more accessible.
Since passkeys were introduced to the world in 2022, hundreds of service providers have embraced the greater security and usability that passkeys bring to users. Over 15 billion user accounts are now equipped with the option to use a passkey instead of relying on passwords, which are easy to steal and reuse for account takeovers and fraud. Organizations that deploy passkeys are consistently finding that a greater percentage of their users are able to sign into services in far less time – which helps generate added revenue and/or employee productivity while also reducing fraud and account takeovers.
To further advance and promote the use of passkeys, the first Thursday in May each year is now recognized as World Passkey Day (previously World Password Day). Companies can take the Passkey Pledge in advance of World Passkey Day and commit to making a good-faith effort to achieve the following goals throughout the year:
For service providers that have an active implementation of passkeys for sign-in: Within one year of signing the pledge, demonstrate actions taken to measurably increase the use of passkeys by users when signing into the company’s services.
For service providers that are in the process of implementing passkeys for sign-in: Within one year of signing the pledge, demonstrate measurable actions taken to enable passkeys for signing into the company’s services.
For vendors with FIDO-based products and/or services: Within one year of signing the pledge, demonstrate actions taken to measurably increase the use of passkeys through adoption of the company’s products and/or services.
For vendors developing FIDO-based products and/or services: Within one year of signing the pledge, demonstrate measurable actions to FIDO Certify its products and launch a product or service with passkey sign-in support.
For industry associations and standards organizations: Within one year of signing the pledge, demonstrate actions to increase the visibility and benefits of passkey sign-ins.
Organizations that take the pledge will receive assets to support their involvement and will have the opportunity to take part in activities and announcements planned for World Passkey Day on May 1, 2025.
More details on the pledge, including the sign-up form, can be found at https://fidoalliance.org/passkeypledge/.
Taking Action: Resources to Help Organizations Fulfill the Pledge
The FIDO Alliance has resources and best practices for Passkey Pledge organizations to take action, including:
Sharing their commitment to the Passkey Pledge via external communications channels
Leveraging the guidance on passkeycentral.org to plan, implement and expand their passkey rollouts
Implementing the FIDO Design Guidelines, data-driven UX best practices for passkey rollouts
Getting their products FIDO Certified to demonstrate that their products are compliant, interoperable and secure
Releasing case studies on their or their customers’ behalf to share implementation journeys and business outcomes. Organizations can reach out to info@fidoalliance.org to submit case studies directly to the FIDO Alliance
Taking part in FIDO Alliance member activities and working groups to further drive passkey optimization and adoption
Planning and/or taking steps to remove passwords as a sign-in option.
About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication technologies and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services. For more information, visit www.fidoalliance.org.
Contact
Did you know that the first coupon dates back to the 1800s?
It hasn't changed much for something that's been around that long, and how shoppers redeemed coupons stayed the same…until now.
In this episode, Brett Watson, CEO of The Coupon Bureau, joins hosts Reid Jackson and Liz Sertl to talk about how the world of coupons is finally catching up. With digital formats, real-time validation, and built-in fraud prevention, coupons are evolving for the modern retail experience.
Brett explains what it takes to bring an old system into the digital age, why coupon fraud has become such a costly issue, and how these changes are unlocking new opportunities for both brands and consumers.
In this episode, you’ll learn:
How couponing has evolved over the years
The role of coupons in marketing and media
The future of coupon technology
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(03:31) History of couponing
(09:45) The scale of coupon fraud in the industry
(14:56) The new coupon standard
(18:54) Different innovations and applications of coupons
(23:23) Brett’s favorite tech she can’t live without
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Brett Watson on LinkedIn
The post Learn & Work Ecosystem Library Glossary appeared first on Velocity.
Demand for computing power is exploding as AI models, cloud services, and decentralized applications consume ever-greater amounts of energy. But not all energy is created equal. In some regions, a data center might be drawing power from a coal-heavy grid, while elsewhere excess wind or solar power goes unused. For the past decade, tech giants addressed this by buying renewable energy or improving efficiency. But what if we could take a more dynamic approach — shifting computational workloads across location and time to maximize use of clean energy?
Enter Carbon-Aware Nomination, a groundbreaking mechanism from Energy Web that does exactly that. It ensures computational tasks run where and when they have the lowest carbon intensity. This isn’t a standard cloud feature or a typical blockchain project, but a fusion of both: it connects decentralized computing networks with real-time carbon intensity data from WattTime and the Green Software Foundation’s Carbon-Aware SDK. And the best part — it’s going open-source, inviting the entire industry to build a greener digital future together.
The Carbon Cost of Computing: Why It Matters
Computing today has a massive carbon cost. Data centers account for around 1–2% of global electricity use (on par with the airline industry), and that share is rising fast. Some forecasts warn that with the AI boom, data center energy demand could hit double-digit percentages of global consumption by 2030. In short, if we don’t make computing more sustainable, it will hinder global climate goals. Today’s solutions to decarbonize workloads rarely have real-time “carbon awareness.” This means a lot of flexible computing — like batch processing or AI training jobs — might be running at the worst possible times for the planet. Carbon-Aware Nomination changes that paradigm by intelligently routing workloads to times and places where electricity is greenest.
Why Carbon-Aware Nomination Is a Game-Changer
Carbon-aware computing isn’t an entirely new idea, but no one has implemented it the way we have. Existing approaches have serious limitations:
Blind Spots in Conventional Cloud Sustainability: Major cloud platforms have introduced sustainability dashboards and efficiency tools to help customers estimate carbon footprints. However, these proprietary solutions require you to trust their reporting. There’s no independent way to verify if a given workload actually ran on low-carbon energy. In other words, you might see carbon-saving claims, but you can’t prove them. While Carbon-Aware Nomination is built for decentralized compute networks, its methodology could be adapted for cloud orchestration. However, the current implementation is blockchain-native, ensuring verifiable, tamper-proof sustainability claims.
Decentralized Compute Lacks Climate Intelligence: Emerging networks like iExec, Golem, and Akash distribute workloads across many nodes, leveraging blockchain for compute marketplaces. Yet they schedule jobs without considering the carbon intensity of those nodes. A task might just as easily run on a coal-powered node as on a solar-powered one. Until now, no decentralized computing platform has integrated real-time carbon optimization into its scheduling logic.
This is where Carbon-Aware Nomination stands apart. It combines decentralized computing with live carbon-intensity data for the first time. By doing so, it delivers several unique benefits that neither traditional clouds nor existing blockchain compute projects offer:
Decentralized & Trustless Architecture: Built on the Energy Web X blockchain, every decision and claim is transparent. There’s no single company controlling the process, and sustainability claims are tamper-proof and verifiable by anyone. This brings a new level of trust to green computing — auditors or stakeholders can confirm, via the public ledger, that a workload ran at a time of low grid emissions.
Real-Time Carbon Intelligence: The system taps into WattTime’s API and the GSF Carbon-Aware SDK to get live and forecasted grid emissions data. Workloads are continuously matched to the cleanest energy times in real time. If wind picks up in Region A or solar output surges in Region B, the scheduler knows and can route tasks accordingly. We’re not offsetting carbon with credits or averaging it annually; we’re actively avoiding emissions as they happen.
Hybrid Public/Enterprise Nodes: Flexibility is built in. Organizations can nominate workloads to run on public decentralized nodes or their own infrastructure — whichever meets the carbon criteria. For example, an enterprise could use its private servers when they’re running on green power, or tap into a public pool of nodes in another region when local power is dirty. Carbon-Aware Nomination isn’t “all or nothing” — it’s a smart overlay that finds the greenest option across a mix of resources.
Verifiable ESG Reporting: Every workload handled through this system generates a digital proof of its execution with associated carbon data. Think of it like an eco-receipt. This proof can feed directly into ESG reports or sustainability audits, backed by blockchain records. Instead of saying “we think our computations were low-carbon,” organizations can cryptographically demonstrate it. This level of accountability is increasingly crucial as investors and regulators demand hard evidence of climate action.
Open-Source Collaboration: Unlike proprietary cloud solutions, Carbon-Aware Nomination is being released as open-source software. This invites an entire community — from researchers to startups — to use it, audit it, and improve it. By sharing the code, Energy Web ensures transparency of the algorithm and accelerates innovation. Anyone will be able to plug into the system or even contribute new features (for example, supporting new types of workloads or integrating additional data sources). We believe an open approach is the fastest way to standardize carbon-aware computing across the world.
How It Works (In Simple Terms)
Checks Carbon Data: A group of computers (nominators) analyze real-time electricity grid data to find where energy is greenest.
Finds the Best Spot: The nominators select and agree on the cleanest available computing resources to execute the workload and qualify for rewards.
Runs & Verifies: The selected computers can accept and execute the workload, and the blockchain records proof of completion, ensuring eligibility for rewards.
How It Works (In Detail): The Carbon-Aware Nomination Pool
So, how does the system actually orchestrate a “greener” workload? Rather than a central scheduler deciding where tasks run, Carbon-Aware Nomination uses a decentralized pool of worker nodes and nominators that collectively determine the optimal execution plan. Here’s a high-level look at the process:
1. Live Carbon Data Feeds: Using the Carbon Aware SDK, the system continuously pulls real-time and forecasted carbon intensity data from WattTime’s service. This data covers different regions and grids, updating as conditions change (like when a big solar farm comes online in the afternoon or when a coal plant ramps up at night).
2. Normalized Comparison: Because “100 gCO₂/kWh” means different things on different grids, the scheduler normalizes carbon intensity across regions. This prevents it from always favoring the same region and ensures a fair comparison. In essence, the algorithm knows what “clean” means for each location and time — a form of smart context.
3. Energy Web X Worker Registry: All participating compute nodes (whether run by individuals, companies, or data centers) register on the Energy Web X blockchain registry. They publish metadata about their location, hardware, and efficiency. This on-chain registry is like a directory of available computing resources, with info crucial for carbon-aware decisions. It’s also transparent — anyone can see which nodes are available and where they are.
4. Green Nomination Process: When a workload needs scheduling, a separate group of decentralized nominators, independent from the compute nodes themselves, evaluates the available options. These nominators analyze live carbon intensity data (from step 1), assess each node’s performance and capacity, and rank them based on sustainability. Essentially, the nodes compete, but not on price or speed alone: on verifiable carbon performance. The system then nominates the best-suited, lowest-carbon node to execute the task, ensuring a trustless and tamper-proof selection process. (See the sketch after this list.)
5. Workload Execution & Proof: The chosen node runs the computation. During and after execution, it logs the energy used and the carbon intensity at that time. This information is reported back and recorded (for example, as an attestation on Energy Web X). The result is a verifiable proof that “Task X was executed at Time Y in Region Z with carbon intensity Q gCO₂/kWh.” If someone doubts the claim, the proof is on the blockchain for anyone to verify.
6. Continuous Optimization: Over time, the system can employ incentives to improve efficiency. For instance, node operators who consistently provide low-carbon compute (perhaps by adding their own renewable energy or load-shifting) could be rewarded. Likewise, if the network notices certain regions becoming cleaner (say, a new wind farm installed), it will naturally start shifting more workloads there. The feedback loop encourages the whole ecosystem to move towards cleaner operations, as sustainable nodes get more business.
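To make the ranking in step 4 concrete, here is a minimal, illustrative shell sketch. The nodes.csv snapshot (node id, region, normalized carbon score) is entirely hypothetical and invented for this example; the real nominators work from the on-chain registry and live WattTime data rather than a local file.

# nodes.csv (hypothetical): node_id,region,normalized_carbon_score
printf '%s\n' \
  'node-berlin,DE,0.82' \
  'node-oslo,NO,0.11' \
  'node-ohio,US-MIDA,0.64' > nodes.csv

# Rank candidates by normalized score (ascending) and nominate the cleanest
sort -t, -k3,3n nodes.csv | head -n 1
# -> node-oslo,NO,0.11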
In practice, this means a company using Carbon-Aware Nomination could submit a batch job and know that the job will run at the best possible time (perhaps an hour later when a green energy surge comes) and in the best location (maybe on a server one country over where it’s a windy night), all without manual intervention. The heavy lifting of “when and where to compute” is handled by the decentralized logic in a transparent way.
How Carbon-Aware Nomination Fits into Energy Web X
Carbon-Aware Nomination is a component of the broader Energy Web X (EWX) ecosystem. EWX is Energy Web’s new architecture for decentralized solutions, and it supports multiple “nomination” methods (i.e., scheduling and matching mechanisms) for the compute workloads of its solutions. Carbon awareness is one powerful approach, but not the only one — some applications might optimize purely on cost or latency, for example. Within EWX:
Solutions can opt in to Carbon-Aware Nomination if minimizing emissions is a priority, or choose other nomination modules better suited to their needs. This flexible architecture means EWX can cater to different preferences (greenest vs. fastest vs. cheapest, etc.), and Carbon-Aware Nomination is available for any solution that cares about sustainability.
Even for applications that prioritize performance, Carbon-Aware Nomination can run in the background or as a secondary filter. For instance, if two nodes are equally capable, why not pick the one on cleaner energy? In this way, the carbon-aware mechanism can enhance other scheduling strategies by adding a sustainability lens.
All of this happens while leveraging the security and transparency of Energy Web’s blockchain. The nominations (scheduling decisions) and the resulting proofs are recorded on-chain, which aligns with Energy Web’s mission to use open digital infrastructure for the clean energy transition. EWX provides the trust layer that makes Carbon-Aware Nomination’s claims audit-proof.
Decentralized Nomination: Trustless, Transparent, and Resilient
Unlike a traditional cloud scheduler (where one company’s software decides where your job runs), Carbon-Aware Nomination operates without a single controlling entity. The decision process is distributed among many participants and governed by open algorithms. This decentralized approach brings several advantages:
Trustless operations: You don’t have to trust Energy Web or any cloud provider’s claims — you can verify the outcomes yourself on-chain. If a workload was supposed to run on green power, anyone can check the records and confirm it did. This is crucial for companies that need to report emissions reductions to regulators or want to avoid greenwashing.
Transparency: Every step, from the carbon data used to the final selection of a node, can be made transparent. Community members could even watch a dashboard of live nominations happening, seeing in real time how the system is chasing the lowest-carbon resources. This level of openness is unheard of in proprietary cloud scheduling.
Resilience: Decentralization also means there’s no single point of failure. The nomination process can continue even if one node or one data feed goes down. Multiple nodes participate in making decisions, and the blockchain ensures a canonical record. It’s much harder to corrupt or game the system — doing so would require attacking a broad, global network of participants.
For users, this simply translates to peace of mind. You get a robust service that not only optimizes for sustainability but is also inherently reliable and tamper-proof.
Why Open Source Matters
Energy Web is committed to open-sourcing the entire Carbon-Aware Nomination system. By making it freely available to developers, enterprises, and even competitors, we aim to set a new industry standard for carbon-aware computing. Transparency is a core value here — anyone can inspect the code to understand how decisions are made and suggest improvements. Open source also accelerates innovation: a global community can adapt the tool for new use cases (imagine carbon-aware scheduling for edge devices or for other batch processes like rendering and scientific computing). We’re releasing this under an open license so that this carbon-aware logic can proliferate everywhere, not just within Energy Web’s ecosystem.
Ultimately, climate change is a shared challenge. We believe that by open-sourcing this solution, we enable network effects — more contributions, more adoption, and more emissions saved. No single company can decarbonize IT on its own, but together, we can make carbon-aware computing the “new normal” for all data centers and devices.
Join the Movement
Carbon-aware computing is the future. Whether you’re an enterprise managing thousands of servers, a blockchain enthusiast, or a developer hacking on weekends, you can start integrating Carbon-Aware Nomination into your workflows to make a tangible impact. This isn’t just an Energy Web project — it’s a call to all cloud providers, decentralized network operators, and software platforms: join us in running workloads on clean energy. Imagine a world where every AI training, every render job, every transaction validation automatically seeks out the greenest energy available. That’s what we’re building, and we invite you to build it with us.
The code will be open-sourced on Energy Web’s GitHub (and accessible through our developer portal). We’ll be hosting community calls and tutorials for those who want to implement it or contribute. By working together, we can ensure that the digital infrastructure of the future not only powers our economies — but also heals our planet. It’s time to decarbonize the cloud, one workload at a time.
Carbon-Aware Nomination System for Decentralized Computing is now live was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
This is the final post in a series drawing on insights from a report authored by We Are Open Co-op (WAO) for the Irish National Digital Leadership Network (NDLN). The report explores the historical roots of credentialing, the emergence of microcredentials, and their potential to reshape education and professional development.
This post draws on the insights explored so far to outline a vision for the future of microcredentials. By focusing on inclusivity, open standards, and meaningful recognition, microcredentials can be a transformative tool for education and professional development. Here, we propose actionable steps for organisations to harness their potential and create systems that truly empower learners.
Part 1 — Introduction and Context
Part 2 — The Evolution of Credentialing
Part 3 — Demystifying Microcredentials
Part 4 — Trends Shaping the Future of Microcredentials
Part 5 — The Role of Technology in Microcredentialing
Part 6 — Challenges and Risks in Microcredentialing
Part 7 — A Vision for the Future of Microcredentials (this post)
Open Recognition as a Guiding Principle
A key aspect of the future of microcredentials is the principle of Open Recognition, which values learning in all its forms, whether formal, informal, or experiential. By embracing this approach, organisations can ensure microcredentials acknowledge diverse achievements and highlight contributions that might otherwise be overlooked.
For instance, microcredentials can help:
Charities — Recognise the skills gained through volunteering, advocacy, and community projects, offering participants portable evidence of their contributions.
NGOs — Validate the efforts of community workers, promoting inclusion and recognising the expertise developed in challenging environments.
Co-ops — Highlight collaborative and informal learning within member-driven structures, supporting collective progress and innovation.
Businesses — Align microcredentials with organisational priorities, such as sustainability or diversity, to strengthen employee development and retention.
Higher Education — Create stackable credentials that integrate with degree programmes and reflect industry partnerships, enabling more flexible and relevant learning pathways.
The Responsible Use of Technology
Technology plays a critical role in delivering effective microcredentialing systems. Tools like Open Badges and Verifiable Credentials ensure transparency, portability, and security. Digital wallets give learners control over their achievements, while AI can assist in mapping skills and recommending pathways.
However, implementing these tools requires careful consideration. Equity, accessibility, and data privacy must be central to their design to avoid excluding certain groups or compromising trust. By prioritising interoperability and avoiding proprietary systems, organisations can ensure credentials remain functional across platforms and contexts.
Encouraging Cross-Sector Collaboration
The development of impactful microcredentialing systems relies on partnerships across education providers, employers, policymakers, and technology developers. Collaboration can:
Align microcredentials with societal goals and workforce needs.
Establish shared quality assurance frameworks to build trust.
Pool resources and expertise to address challenges like accessibility and cost.
Such partnerships help ensure that microcredentials are not only useful but also widely recognised and respected.
Embedding Inclusivity and Equity
For microcredentialing systems to achieve their full potential, inclusivity must be at their core. This means designing platforms that are accessible to people with diverse needs, reducing financial barriers, and proactively engaging marginalised communities. Embedding these values ensures microcredentials contribute meaningfully to social equity and support lifelong learning opportunities.
Taking the Next Step
Microcredentials have the potential to celebrate diverse learning experiences, allowing individuals to present their skills and achievements in ways that matter. Whether your organisation is considering a pilot project or a larger-scale initiative, We Are Open Co-op can provide the expertise you need. From strategic planning to technological implementation, we ensure your microcredentialing projects are impactful and learner-focused.
👋 Get in touch with us today to explore how we can support your journey toward a more inclusive and forward-thinking approach to credentialing.
Reframing Recognition: Part 7 was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post Driving Value through Innovation: Verified Credential Insights from Cisive PreCheck appeared first on Velocity.
When can you trust a software release? How do you know that a software repo is safe, that it represents the intent of its creators? On the 20th anniversary of Git, these questions are more important than ever.
Obviously, Git lays the foundation for trust in software releases with its ability to sign commits, but the trust of the system is unfortunately shallow. Untrusted content can be merged into a trusted repo, commit histories can be rewritten, and trust can’t be reliably extended into the future. These are crucial issues to solve if we are trusting software that originates in Git … and some pretty crucial software originates with Git, from Microsoft’s vscode and the OpenSSL project to the vue.js framework. That’s what led me to the design of the Open Integrity system. Though Git may look to the average user like it offers a strong level of trust, it’s a dangerous mirage. Open Integrity makes repo trust a reality.
Open Integrity is still built with Git, meaning that it can be used on GitHub, GitLab, or whatever other Git tool you prefer. There are no additions required other than the Open Integrity scripts themselves. However, Open Integrity makes trust the default rather than an add-on, establishing a root of trust when a repo is created, defending it against inappropriate additions, and extending that trust to new users and new keys as a project evolves.
Beyond that, Open Integrity’s root of trust can also be used as a DID (decentralized identifier) identity, supporting self-sovereign identity for a user or a project. But taking advantage of that might be the next step. For the moment, let me dive a bit further into the problems of Git’s current trust framework and how I designed the architecture of Open Integrity to resolve them.
A Problem of Trust
The foundation of trust in Git is signing commits with a signing key that is registered with a Git account, but that turns out to be a fragile level of trust because it leaves a number of loopholes.
Signing isn’t required. Even if an account has a legitimate signing key, use of that key isn’t required. Even if a Git hosting service enforces commit signing, unsigned commits can typically still be merged from branches.
Merging doesn’t guarantee signatures. Generally, merging offers one of the biggest gaps in signing security. It’s not just that merged commits can be unsigned, but that a branch can be deleted after merging, leaving no trace as to whether its commits were signed or not.
Things don’t necessarily get better when signing actually occurs.
Repo origin can’t be verified. Though you can verify signed commits belong to the person who currently controls a repo, there’s no way to verify that they haven’t illegitimately taken over the repo since its inception.
Chain-of-trust functionality is non-existent. On the flip side, there’s no way to show a legitimate transfer of authority between a repo’s originator and its current controller.
Key revocation is weak. Though keys can be manually revoked, there’s no way to automatically do so, and there are no warnings if a revoked key was used for signing.
History can be rewritten. Finally, Git deliberately includes tools that allow you to rewrite commit history: editing commit messages, rebasing at a large scale, and even removing or changing files! Rewriting changes SHA-1 checksums, but as with the other dangers here, there’s inadequate messaging to warn of the issue.
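For a concrete baseline, here is a minimal sketch of what stock Git offers today, assuming SSH-based signing (Git 2.34+); all of these are real Git options, and the key paths are illustrative. Note that every setting below is local to a single clone: nothing travels with the repo or binds it to its origin, which is exactly the gap described above.
git config gpg.format ssh
git config user.signingkey ~/.ssh/id_ed25519.pub             # illustrative key path
git config gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers # needed to verify SSH signatures
git config commit.gpgsign true                               # sign your own commits by default
git config merge.verifySignatures true                       # refuse to merge unsigned commits locally
git log --show-signature -5                                  # spot-check recent signatures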
Solving the Problems
Solving Git’s trust issues is relatively easy in theory:
- Repo ownership needs to be cryptographically provable back to a repo’s origin.
- Signing needs to be required for commits and merges alike.
- Commit history needs to be irrevocable.
- Strong warnings are needed when these conditions aren’t met.

However, nothing is ever that simple in practice. Practically solving the problems required designing an Open Integrity architecture that resolves each of the highlighted problems:
Problem | Solution
--- | ---
Unverified Origin | Inception Commit
Unverified Transfer | Trust Transition Commit
Key Revocation | Trust Transition Commit

Here’s how each of these architectural elements works:
Inception Commit. An inception commit is the first commit made to a repo. It binds the repo to its inception (initial) key with an empty commit, securing the repo with both an SSH signature and a SHA-1 hash that’s hard to deliberately collide because the commit’s content is so minimal.
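As a rough sketch of the idea (the real logic lives in the project’s setup_git_inception_repo.sh; the repo name and key path here are illustrative), an inception commit can be approximated with stock Git:
git init new-repo && cd new-repo
git config gpg.format ssh
git config user.signingkey ~/.ssh/inception_key.pub   # illustrative inception key
git commit --allow-empty -S -m "Initialize repository and establish a SHA-1 root of trust"
git rev-parse HEAD                                    # this hash is the repo's durable root of trust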
Trust Transition Commit. Any authorized key can be used to generate a trust transition commit, which is another zero-sized commit. Additional trusted keys can be added, and keys can also be revoked. This allows both keys and team members to change over the course of a project, which is crucial for an open-source or large-scale project. The first trust transition commit is made with the inception key; after that, any authorized key can be used.
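One plausible realization records the signer roster in an OpenSSH-style allowed-signers file kept under ./config/verification/ — the path and file name are my guesses based on the log output later in this article, not the project’s definitive schema:
mkdir -p config/verification
cat >> config/verification/allowed_commit_signers <<'EOF'
alice@example.com namespaces="git" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5...
EOF
git add config/verification/allowed_commit_signers
git commit -S -m "Added second device key for Alice"   # signed by a currently authorized key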
Improved Logging. Many of the protections of Open Integrity arise from scripts that incorporate extensive logging, ensuring that users can see that all security measures are being met. This is done with Git commit messages and with git notes.
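For instance, a verification result can be attached to a commit with stock git notes; the notes ref name here is illustrative:
git notes --ref=open-integrity add -m "Signature verified against allowed commit signers" HEAD
git notes --ref=open-integrity show HEAD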
As an example, here is what the commit logging looks like for an inception commit, a trust transition commit, and a key revocation:
Inception Commit:
🔹 Commit: #a3306ef [🏁 Inception Commit] (Signed ✅)
├─ Message: "Initialize repository and establish a SHA-1 root of trust"
├─ Signed by: @a61TkTtL... (🏁 Alice using Device 1 <alice@example.com>)
├─ Empty: true (no files added)
├─ SHA-1 Protection: Constrained content + SSH signature
└─ Verification: Platform-independent
Trust Transition Commit:
🔹 Commit: #b24d9c1 [🔑 New Allowed Commit Signers File] (Signed ✅)
├─ Message: "Added second device key for Alice"
├─ Signed by: @a61TkTtL... (🏁 Alice using Device 1 <alice@example.com>)
└─ New Authorized Commit Signers:
- 🏁 Inception Key explicitly not included for future commits
- + @a61TkTtL... (Alice using Device 1 <alice@example.com>)
- + @f84PmWnY... (Alice using Device 2 <alice@example.com>)
Key Revocation:
🔹 Commit: #c3d7f12 [🔄 Key Rotation: Removed Alice's Second Device] (Signed ✅)
├─ Message: "Revoked second device key"
├─ Signed by: @a61TkTtL... (Alice using Device 1 <alice@example.com>)
├─ Authorized Commit Signers changed:
- 🗑️ @f84PmWnY... (Alice using Device 2 <alice@example.com>) is no longer authorized.
- @a61TkTtL... (Alice using Device 1 <alice@example.com>)
- @b73RkKpQ... (Bob using Work Laptop <bob@example.com>)
- @c58XmWpL... (Charlie using Home PC <charlie@example.com>)
The commits also clearly show when something is done that violates the basic precepts of Open Integrity. For example:
├─ 🚨 ERROR: Commit was signed using a previously revoked key!
Scripted Functions. Logging and the other protections of Open Integrity are available through scripts and one-line “snippet” scripts that can run from aliases. They replace the bare git functions with ones that ensure that each commit or merge meets all the Open Integrity requirements. For example, here’s a git merge alias:
git merge feature-branch --verify-signatures \
--require-author-signature \
--require-committer-authorization
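Of the flags above, only --verify-signatures is stock Git; the rest are handled by the Open Integrity wrapper scripts. As a hypothetical illustration of the alias mechanism (the script name is invented here; the real snippets ship with the project):
git config --global alias.oi-merge '!sh ./scripts/oi_verify_merge.sh'
git oi-merge feature-branch    # expands to: sh ./scripts/oi_verify_merge.sh feature-branch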
Defined Settings. Git supports settings for repos and workflows; hardening these settings can further improve repo security. “Enforcing Signed Commits and Pull Request Requirements in GitHub” describes how GitHub makes use of these settings; similar functionality is available elsewhere in the Git ecosystem.
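For example, a few stock Git settings tighten a repo along these lines (illustrative, not an exhaustive hardening guide; the receive.* options apply to the server-side copy of the repo):
git config branch.main.mergeoptions "--verify-signatures"   # per-branch merge policy
git config receive.denyNonFastForwards true                 # resist history rewrites on push
git config receive.denyDeletes true                         # keep branches from being deleted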
Data Preservation. Logging, aliases, and settings together ensure the preservation of some data related to a merge. As the following commit shows, information such as an author’s key is preserved when a merge occurs, along with information that the committer verified the original signature.
🔹 Commit: #fa34d76 [🔀 Merge Commit with Verified Author] (Signed ✅)
├─ Message: "Merge feature-branch: Added authentication layer"
├─ Committer: @c58XmWpL... (Charlie using Home PC <charlie@example.com>)
├─ Author: @e83TkLqM... (Eve using Dev Machine <eve@example.com>)
├─ Author Signature: Verified ✓ (signed 2024-02-10T15:30:00Z)
├─ Committer Authorization: Verified ✓ (in allowed_signers since 2024-01-15)
└─ Signatures stored: ./config/verification/signatures
The original author’s signature is not currently preserved due to the complexity of doing so. Working to preserve the signature without having to keep the branch around forever remains on the TODO list for Open Integrity.
Using Open Integrity Now
Open Integrity is available at the OpenIntegrityProject repo. Though it’s still in development, you can get started with it right now.
You should do so if you have a use case for Open Integrity such as:
- You’re developing or releasing software through Git that holds sensitive information (such as a digital-asset wallet, a healthcare app, or even a web browser).
- You’re developing or releasing software through Git that might be used by people or companies that also hold sensitive data (such as financial or medical companies), even if your app does not.
- You’re developing or releasing software that requires any level of trust from its users.
- You have a team of developers that changes over time.

The docs at the Open Integrity repo include a more extensive Problem Statement that goes beyond what’s in this article, but the most important file might be the setup_git_inception_repo.sh script, which will create a new Open Integrity repository for you. If you want to better understand how it works under the hood, see One Liners, which also documents step-by-step how to set up and use an Open Integrity repo. Future plans can be found in the Project Roadmap.
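A first run might look like the following; the argument is my guess, so check the One Liners doc for the actual invocation:
./setup_git_inception_repo.sh my-new-repo    # hypothetical argument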
Give it a try, and if you have questions, please feel free to open an issue or start a conversation.
I’ve been working on Open Integrity for about a year now. I started this work because I thought the promise of trust implicit in Git was extremely important for software design and release, but that it needed improvement. I hope you’ll agree and find the improvements worthwhile!
IEEE P7012, nicknamed MyTerms—much as IEEE 802.11 is nicknamed Wi-Fi—is a standard we expect to go from draft to done later this year.
But that should not stop us from developing for it. Because what it specifies is laid out plainly in its PAR (Project Authorization Request):
This draft standard covers contractual interactions and agreements between individuals and the service providers they engage on a network, including websites.
It describes how individuals, acting as first parties, can proffer their privacy requirements as contractual terms and arrive at agreements recorded and kept by both sides.
These terms shall be chosen from a collection of standard-form agreements in a roster kept by an independent and neutral non-business entity.
Computing devices and software performing as agents for both first and second parties shall engage using any protocol that serves the purpose.
The first party shall point to a preferred agreement, or a set of agreements, from which the second party shall accept one.
Party-to-party negotiations over terms in any of these contracts or other agreements are outside the scope of this standard. If both parties agree, the chosen contract or agreement shall be signed electronically by both parties or their agents.
A matching record shall be kept by both sides in a form that can be retrieved, audited, or disputed, if necessary, at some later time–and which is available to do so easily.*
MyTerms does not specify what tech to use. It just says what needs to happen. This is to leave development as open as possible.
The main thing is that MyTerms obsolesces cookie notices by putting individuals in charge of their privacy requirements—and putting those requirements in the form of contracts, which individuals proffer as first parties.
Never mind that this hardly seems thinkable to the status quo. The same was once said of the Internet, the Web, email, and other free and open graces we take for granted today.
Putting each of us in charge of our privacy online is what makes MyTerms the most important standard in development today. But only if we make it so.
That’s what we’ll be working on tomorrow at VRM Day—and through the following three days at IIW. VRM Day is free, and IIW is cheap as conferences go (and extremely leveraged). Both are at the Computer History Museum in Silicon Valley.
See you there.
*Shall is IEEE-speak for will or must. The purpose is to make clear that it does not mean should, could, or any other modal auxiliary verb.
Google is developing a new feature that will allow secure passkey transfers between Android devices through its Google Password Manager service. The functionality, which is currently under development, aims to simplify the process of moving authentication credentials across devices while maintaining security standards. The development follows Google’s recent enhancements to its Password Manager, including improved security features and user interface updates.
The passkey transfer capability is being integrated into Google Password Manager, with recent releases of Google Play Services containing direct references to passkey export and import tools. These developments are part of Google’s broader effort to enhance authentication security and usability on Android platforms. The initiative comes as enterprise adoption of passkeys continues to grow, according to recent FIDO Alliance research.
Just days after Google confirmed it is bringing its next AI upgrade to Gmail, with major privacy implications, there’s more good and bad news for the 3 billion users relying on Google to deliver secure, spam-free email to their phones and computers. It turns out that a dangerous email attack has operated under the radar for years — until now.
First to the good news. Google’s tightening restrictions on the mass delivery of spam emails to your inbox is working and it’s having a devastating impact on the industry spawned to plague you with marketing messages. “Over the last year,” website MarTech says the industry has seen “engagement rates (open and click rates, especially) drop considerably. Their emails only show up in the inboxes of people already engaging with the brand. For most subscribers, the emails are getting flagged as spam.”
Google’s password manager may soon allow you to transfer your passkeys to a new phone, making their use as a login tool even easier.
An APK teardown by AndroidAuthority suggests that Google may be working on an update that would allow you to export passkeys from one device to another.
Password export and import is already a key feature of many of the best password managers, but the same functionality for passkeys would be a huge step forward.
This is the sixth in a series of blog posts drawing on insights from a report authored by We Are Open Co-op (WAO) for the Irish National Digital Leadership Network (NDLN). The report explores the historical roots of credentialing, the emergence of microcredentials, and the opportunities they present for reshaping education and professional development.
While microcredentials present exciting opportunities, their implementation comes with challenges that require careful consideration. Issues such as equity, quality assurance, and unintended consequences must be addressed to ensure these systems benefit learners, organisations, and society as a whole. WAO uses the lens of Open Recognition to help address some of these challenges.
Part 1 — Introduction and Context
Part 2 — The Evolution of Credentialing
Part 3 — Demystifying Microcredentials
Part 4 — Trends Shaping the Future of Microcredentials
Part 5 — The Role of Technology in Microcredentialing
Part 6 — Challenges and Risks in Microcredentialing (this post)
Part 7 — A Vision for the Future of Microcredentials

Equity and Accessibility
Microcredentials have the potential to widen participation in education, but they must be designed to include all learners. Organisations need to ensure their systems are accessible to users with diverse needs, including those with disabilities, limited digital literacies, or restricted access to technology.
Financial barriers also pose a challenge. For learners in marginalised communities, the cost of earning and using microcredentials could limit their accessibility. Subsidised or open-access credentialing models can help bridge this gap and make microcredentials a tool for inclusion rather than exclusivity. As can credentialing existing knowledge, skills, and behaviours, in accordance with Recognition of Prior Learning.
Quality Assurance and Trust
One of the most pressing challenges is ensuring that microcredentials maintain credibility and rigour. Open Badges provide a robust digital framework that ensures credentials can be verified and trusted, embedding metadata that describes the skills achieved, the criteria met, and the issuing organisation. This transparency builds trust among stakeholders, enabling credentials to be portable and interoperable across different platforms and contexts.
Image CC BY-ND Visual Thinkery for WAO
However, quality assurance ultimately rests with the issuing organisation. It is their responsibility to ensure that criteria for earning the credential are clear, the evidence supporting the achievement is credible, and the learning experience is both meaningful and relevant. Aligning credentials with recognised national or international standards further enhances their credibility and utility, ensuring they meet the expectations of both learners and employers.
Avoiding Vendor Lock-In
The reliance on proprietary platforms for issuing and managing microcredentials can create significant risks. As mentioned in a prior post, vendor lock-in limits portability and learner control, undermining the principles of openness and inclusivity.
Adopting open standards, such as Open Badges and Verifiable Credentials, ensures that microcredentials are interoperable across systems. This not only protects learners but also spurs innovation by fostering collaboration between organisations and technology providers.
Data Privacy and Security
Microcredentialing systems rely on digital platforms to issue, store, and share credentials, making data privacy and security critical concerns. Protecting the personal information of learners is both a legal and ethical responsibility.
Open Badges v3 offers solutions by enabling credentials to be controlled directly by learners. Features such as selective sharing ensure that individuals can decide what information to share and with whom. However, care must be taken to avoid systems that lock learners into proprietary platforms, restricting the portability of their achievements.
Unintended Consequences
Microcredentials can create new opportunities for recognition, but poorly designed systems may have unintended effects:
- Overloading learners with excessive or narrowly focused credentials can lead to “credential fatigue,” where the value of individual achievements is diminished.
- Issuing credentials for specific skills or behaviours might inadvertently disadvantage those without them, regardless of their actual capabilities.
- Systems that favour those with more resources, digital literacy, or prior access to credentialing risk perpetuating inequalities rather than addressing them.

To avoid these pitfalls, microcredentialing systems should adopt Open Recognition principles, which aim to value diverse experiences and learning pathways. Recognising informal, community-based, and lived experiences alongside formal achievements can help ensure inclusivity.
Sector-Specific Challenges
Different sectors face unique challenges when adopting microcredentials:
- Charities — Struggle with limited resources to implement digital credentialing systems that are both accessible and effective.
- NGOs — Face difficulties in ensuring that microcredentials are inclusive for the communities they serve, particularly in low-resource environments.
- Co-ops — Need to balance member-driven governance models with the technical requirements of interoperable credentialing systems.
- Businesses — Risk prioritising microcredentials that align only with short-term goals, neglecting broader workforce development needs.
- Higher Education — Must address resistance from academic departments that may view microcredentials as a threat to traditional degree programmes.

Each of these sectors brings its own priorities and constraints to the table, but many of the challenges share common threads. Limited resources, resistance to change, and technical hurdles can all undermine the successful implementation of microcredentials if not carefully managed. Recognising these shared difficulties can help inform more effective, cross-sector strategies.
Practical Steps for Addressing Challenges
To overcome these challenges, organisations need to adopt thoughtful and inclusive approaches that address both technical and systemic barriers.
- Collaborate on shared frameworks that not only ensure technical validity but also assess the clarity of criteria, credibility of evidence, and relevance of learning experiences. These frameworks build trust across institutions and sectors, making microcredentials more valuable and credible.
- Use open standards like Open Badges and Verifiable Credentials to ensure that microcredentials are portable, secure, and learner-centred. These standards allow credentials to be recognised across platforms and contexts, giving learners more control over their achievements.
- Focus on recognising diverse forms of learning, including informal, community-based, and experiential achievements. This approach ensures inclusivity and acknowledges the broad spectrum of skills and contributions individuals bring.
- Continuously evaluate systems for equity and accessibility, addressing barriers such as financial constraints, digital literacy gaps, and platform design issues. By identifying and mitigating these challenges, organisations can expand participation and make microcredentials more inclusive.

Looking Ahead
Addressing these challenges requires thoughtful design, collaboration, and a commitment to inclusivity. By developing systems that prioritise accessibility, quality, and openness, organisations can harness the potential of microcredentials to empower learners and drive meaningful change.
In the final post of this series, we will explore a vision for the future of microcredentials, outlining actionable steps for organisations to integrate them into their strategies effectively.
Reframing Recognition: Part 6 was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
Zug, Switzerland & Oakland, CA — April 3, 2025 — Energy Web, a global nonprofit building open-source Web3 technologies for the energy transition, WattTime, an environmental tech nonprofit providing real-time grid emissions data, and the Green Software Foundation (GSF) today announced the launch of Carbon-Aware Nomination, an open-source system for scheduling computing workloads based on electricity carbon-intensity. This first-of-its-kind solution leverages GSF’s Carbon-Aware SDK, blockchain technology, and live emissions data from WattTime to ensure that applications run when and where power is cleanest, drastically reducing the carbon intensity of computing workloads.
A New Standard for Carbon-Aware Compute
Carbon-Aware Nomination combines Energy Web’s decentralized computing platform with WattTime’s Automated Emissions Reduction (AER) data feeds and GSF’s Carbon-Aware SDK, allowing any batch job or flexible workload to automatically “chase” the lowest-carbon energy available. Instead of running immediately or in a fixed location, a task can be delayed or relocated — for example, executing at night in a region with abundant wind energy — with the entire decision process orchestrated and verified on the Energy Web X blockchain. Unlike traditional cloud computing tools, this system provides cryptographic proof of the carbon emissions caused, giving enterprises and developers unprecedented transparency into the sustainability of their IT operations.
“Our community has been looking for ways to cut IT emissions without sacrificing performance or trust,” said Mani Hagh Sefat, CTO of Energy Web. “Carbon-Aware Nomination delivers that by leveraging decentralization. By partnering with WattTime and integrating GSF’s Carbon-Aware SDK, we’re injecting the best real-time data into an open, trustless network of computing resources. The result is a game-changer — any organization can now ensure its workloads run with the lowest possible carbon impact, and they can prove it. We’re excited to open-source this tool and work with others to scale it up; this is about creating a new norm for green computing across the industry.”
Collaboration Across Climate-Tech Innovators
WattTime’s leadership echoed the significance of this collaboration.
“This partnership demonstrates how data and technology can come together to fight climate change in new domains,” said Gavin McCormick, WattTime co-founder and Executive Director. “Energy Web’s innovative use of our emissions intelligence, coupled with the Green Software Foundation’s Carbon-Aware SDK, ensures that even computing workloads can automatically prioritize clean energy. We’re thrilled to see Automated Emissions Reduction principles expanding into cloud and blockchain infrastructure. By giving organizations the power to run tasks when renewables are abundant, Carbon-Aware Nomination turns climate intention into action.”
The Green Software Foundation (GSF), which has been advancing sustainability in software development, also sees this as an important milestone. “GSF’s mission is to reduce the environmental impact of software, and integrating our Carbon-Aware SDK into decentralized compute systems represents a major step toward that goal,” said Asim Hussain, Executive Director of Green Software Foundation. “By working with Energy Web and WattTime, we’re proving that sustainability can be a core part of modern computing — measurable, verifiable, and open-source.”
An Open-Source Future for Climate-Smart Computing
The Carbon-Aware Nomination system is fully open-source, with documentation and developer tools provided by Energy Web. Companies and developers can integrate it into cloud workflows, decentralized applications, or scheduling software to begin reducing the carbon emissions of their operations. Because the solution runs on a public blockchain, any sustainability claims are transparent and auditable by third parties.
This initiative also aligns with broader efforts by the tech sector to cut emissions: recent studies estimate that data centers and networks account for over 1% of global electricity use, and industry leaders are keen to mitigate this impact.
By launching Carbon-Aware Nomination in collaboration with WattTime and the Green Software Foundation, Energy Web aims to foster an ecosystem of climate-smart computing. The three organizations plan to engage cloud providers, enterprises, and researchers in adopting the approach. This joint effort showcases the power of cross-sector partnerships, bringing together blockchain-based decentralization, real-time environmental intelligence, and software-driven carbon-aware scheduling.
Energy Web, WattTime, and GSF invite interested partners to join the initiative, contribute to the open-source codebase, and collectively drive a new era of sustainable digital infrastructure.
About Energy Web
Energy Web is a software company developing open-source technologies to accelerate the energy transition. Its decentralized solutions leverage blockchain to create innovative market mechanisms, empowering energy companies, grid operators, and businesses worldwide. (www.energyweb.org)
About WattTime
WattTime is an environmental tech nonprofit that empowers all people, companies, policymakers, and countries to slash emissions and choose cleaner energy. Founded by UC Berkeley researchers, we develop data-driven tools and policies that increase environmental and social good. During the energy transition from a fossil-fueled past to a zero-carbon future, WattTime ‘bends the curve’ of emissions reductions to realize deeper, faster benefits for people and planet. Learn more at www.WattTime.org.
The Green Software Foundation (GSF) is a nonprofit organization under the Linux Foundation. It aims to create a trusted ecosystem of people, standards, tooling, and best practices for building green software and hardware. Members of the GSF represent a balanced mix of for-profits, nonprofits, and academia from around the globe and include several Fortune Global 500 firms. The Foundation operates by consensus.
Three Working Groups (Software Standards, Hardware Standards, and Policy) and two Committees (Green AI and Developer Relations) currently oversee the Foundation’s ongoing projects. (www.greensoftware.foundation)
Energy Web, WattTime, and Green Software Foundation Unveil Open-Source Carbon-Aware Nomination to… was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
April 2025
DIF Website | DIF Mailing Lists | Meeting Recording Archive
Table of contents
1. Decentralized Identity Foundation News
2. Working Group Updates
3. Special Interest Group Updates
4. User Group Updates
5. Announcements
6. Community Events
7. DIF Member Spotlights
8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

Global Digital Collaboration Summit Announced: Registration Now Open for July Geneva Conference
The Global Digital Collaboration, a new partnership of intergovernmental, standards, and open source organizations, has launched its official member registration websites to advance public-private cooperation on digital infrastructure following the Global Digital Compact. The inaugural conference will be held July 1-2 in Geneva with space for 1,750 attendees.
Register your interest in attending as a DIF community member. We expect to finalize the attendees by May 1.
Those who pre-registered will be automatically migrated to the new site without needing to complete the registration form again.
It's a Wrap for DIF Labs Beta Cohort
DIF Labs concluded its inaugural Beta Cohort program with a showcase featuring three groundbreaking decentralized identity projects tackling pressing challenges at the intersection of privacy, trust, and digital authenticity. The standout projects included Veranon (enabling private personhood verification), Linked Claims (building trust networks for credentials in an AI-dominated world), and Ordinals Plus (connecting Bitcoin ordinals with decentralized identity). With Beta Cohort 2 applications opening in late April and the program kicking off May 20, DIF Labs continues its mission of supporting human-centric identity solutions that preserve authenticity in our digital landscape.
DIDComm and Cloud Data Encryption
The DIDComm Working Group recently published a timely solution for personal data protection in light of Apple's decision to stop offering end-to-end encrypted iCloud storage in the UK. Their article explains how users can maintain privacy by implementing DIDComm encryption before uploading files to cloud services, effectively creating a user-controlled security layer that even cloud providers cannot access. The post highlights practical implementation methods and resources for those interested in this open-source approach to data sovereignty.
Read more:
How to keep user data private while storing it in the cloud: “By layering End-to-End Encryption (E2EE) on cloud storage services ourselves, we can ensure that our data is protected from unwanted access, even from the storage services themselves.” (Decentralized Identity Foundation Blog)

🛠️ Working Group Updates
Browse our working groups here
Creator Assertions Working Group
The group discussed the evolution of their work with institutional news media and individual content creators, focusing on the identity claims aggregator model. They explored challenges in documenting assurance levels for identity signals and considered a version of the specification for content creators with autonomous credentials.

DID Methods Working Group
The group conducted a deep dive on the DID Key specification, discussing its maturity, supported key types, and potential improvements. They are working on establishing criteria for recommending certain DID methods and implementing a review process to evaluate which methods are mature enough for standardization, with particular attention to interoperability and adherence to DID core standards.

Identifiers and Discovery Working Group
The team focused on two main areas: implementing a well-known DID configuration for DIF to link DIDs to domain names, and finalizing version 1.0 of the did:webvh specification. They made progress on implementing watcher features for long-lasting DIDs and resolved issues related to metadata, error codes, and DNS-based identity verification systems. The specification will soon be released as version 1.0.

🪪 Claims & Credentials Working Group
The group worked on developing a standardized repository for credential schemas in the decentralized identity space. They explored aspects like key validation, long-term availability of DID documents, and URL operations. They're creating comparative tools for different schema approaches and building a structured organization with community drafts, working group approved, and DIF recommended categories.

Applied Crypto Working Group
The team made progress on several cryptographic aspects of BBS+, including pseudonym generation and Sigma protocols. They discussed the potential adoption of their specification by CFRG, exploring issues around privacy concerns and encoding formats. The European wallets definition now includes a section on zero-knowledge proofs that references BBS and BBS+ signatures, representing significant adoption of their work.

DIF Labs Working Group
DIF Labs is preparing for its next cohort.

DIDComm Working Group
Discussions centered around the Trust Spanning Protocol, potential improvements to DIDComm, and the pros and cons of different encoding methods like CBOR. The team also explored ideas for a user-friendly notification system and addressed issues with connection reuse, concluding with plans for future meetings to further discuss these topics.
If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.
🌎 DIF Special Interest Group Updates
Browse our special interest groups here

Hospitality & Travel SIG
The H&T SIG has made substantial progress on their HATPro traveler profile specification, focusing on standardization for names across different cultures and languages while addressing challenges in data transfer protocols. The group refined their implementation guide to include digital wallets and agent interactions, and explored decentralized identity concepts with guest presenters from Condatis showcasing practical applications. Work continues on schema development and integrating verifiable credentials terminology, with thoughtful discussions on balancing legacy systems compatibility with modern data sovereignty principles.
DIF China SIG

APAC/ASEAN Discussion Group
Achraf and Subash from OCBC Bank's blockchain team presented their experience with blockchain, self-sovereign identity, and verifiable credentials. They demonstrated the implementation of a cross-authentication system for multiple bank tenants and the creation of a publicly verifiable claims system using Verifiable Credentials technology. The presentation highlighted a practical financial industry application of decentralized identity technologies.
DIF Africa SIG
The monthly gathering featured a presentation by Anushka Soma-Patel on decentralized identity, digital trust ecosystems, and the Ayra network. The discussion covered data privacy challenges, verifiable credentials, and governance in digital trust systems. The Ayra network was presented as a platform to enable safe, secure, interoperable, and sustainable digital trust ecosystems.
DIF Japan SIG
The group held a session on KERI (Key Event Receipt Infrastructure), an open-source decentralized identity system that offers a blockchain-independent autonomous model. The presentation included a demonstration of verifiable credential issuance using KERI, explaining its key event logs, witness system, and ACDC credential format. Members discussed potential applications and the current maturity of development tools.
DIF Korea SIG

📖 DIF User Group Updates

DIDComm User Group
The group explored enhancing DIDComm with two-factor authentication for SSH, discussing technical implementations and user experience. They proposed improvements to the routing protocol notification system and debated the use of lower trust keys for user-friendly notifications. The team also addressed connection reuse challenges and evaluated encoding methods, such as CBOR versus CESR, for future versions of DIDComm.
Veramo User Group

📢 Announcements at DIF
Conference season is kicking into high gear, and Internet Identity Workshop 40 is next week! Explore our Events calendar to meet the DIF community at leading Decentralized Identity, Identity, and Decentralized Web events.
🗓️ DIF Members
👉 Are you a DIF member with news to share? Email us at communication@identity.foundation with details.
🆔 Join DIF!
If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:
Follow us on Twitter/X
Join us on GitHub
Subscribe on YouTube
🔍 Read the DIF blog
New Member Orientations
If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF’s Slack or contact us at community@identity.foundation.
This is the fifth in a series of blog posts drawing on insights from a report authored by We Are Open Co-op (WAO) for the Irish National Digital Leadership Network (NDLN). The report explores the historical roots of credentialing, the emergence of microcredentials, and the opportunities they present for reshaping education and professional development in line with the concept of Open Recognition.
Technology plays a key role in the development and delivery of microcredentials, enabling their portability, transparency, and scalability. By making use of digital tools built on open standards, organisations can ensure that microcredentials are secure, meaningful, and accessible to learners across sectors.
Part 1 — Introduction and Context
Part 2 — The Evolution of Credentialing
Part 3 — Demystifying Microcredentials
Part 4 — Trends Shaping the Future of Microcredentials
Part 5 — The Role of Technology in Microcredentialing (this post)
Part 6 — Challenges and Risks in Microcredentialing
Part 7 — A Vision for the Future of Microcredentials

Open Standards: Making Credentials Portable and Trusted
One of the key technological foundations for microcredentials is the use of open standards. Open standards ensure that credentials can be shared and recognised across different systems, making them more portable and trustworthy.
Open Badges, first introduced in 2011, set the stage for this revolution. These digital credentials provide important information, such as what the learner achieved, who issued the badge, and the criteria they met. The latest version, Open Badges v3, goes even further by aligning with a global framework called Verifiable Credentials.
This update brings several improvements:
- Learner Control — Credentials can now be managed by the individual rather than being tied to a single institution or platform.
- No Mandatory Image — While the badge image remains a useful visual, the most important feature is the detailed information embedded in the credential, which helps employers and others verify it easily.
- Rich Information — Microcredentials can now include deeper context about what was achieved, such as specific skills, the learning process, and connections to recognised frameworks or standards.

These advancements make microcredentials more meaningful and usable, not just in local settings but internationally as well.
Digital Wallets: Empowering Learners
Digital wallets, sometimes referred to as ‘credential management’ apps, are a transformative tool for managing microcredentials. By storing credentials in a secure, user-controlled environment on their mobile device, learners gain full ownership of their achievements. This flexibility allows individuals to share only the information needed for specific opportunities, such as job applications or further study, while maintaining privacy.
For example, a learner could use their digital wallet to share a single credential with a potential employer, without needing to reveal unrelated personal details such as their age or gender. This approach protects privacy and gives learners greater control over how their achievements are presented.
Digital wallets also facilitate the portability of microcredentials, ensuring that they can be recognised across different contexts. For organisations, adopting digital wallet-compatible credentials demonstrates a commitment to learner-centric practices and enhances the usability of their offerings.
Sector-Specific Applications
Technology enables microcredentials to meet the unique needs of different sectors, enhancing their impact and relevance:
- Charities — Use digital badges to recognise volunteer contributions, fundraising efforts, and advocacy skills, providing participants with evidence of their impact.
- NGOs — Leverage open standards to validate community workers’ skills and promote inclusivity in recognition practices.
- Co-ops — Employ interoperable credentials to highlight informal learning and collaborative problem-solving within member-driven environments.
- Businesses — Integrate microcredentials with existing professional development platforms to enhance employee upskilling and retention, focusing on sustainability or diversity goals.
- Higher Education — Adopt technologies like digital wallets to make credentials earned through co-curricular activities or industry collaborations more accessible and portable for students.

Balancing Innovation with Responsibility
Technology offers new possibilities for recognising and showcasing skills, but its use must be carefully planned to avoid potential pitfalls. Challenges such as over-reliance on specific vendors, risks to data security, and barriers to access can undermine the effectiveness and fairness of microcredentialing systems.
To address these concerns:
- Adopt open-source and interoperable solutions — These reduce the risk of being locked into a single provider’s platform, ensuring flexibility and long-term sustainability.
- Focus on accessibility — Systems should be designed to include users with a wide range of needs, enabling participation from all, regardless of ability or circumstance.
- Prioritise data security and privacy — Organisations must implement robust measures to protect sensitive user information, ensuring compliance with relevant regulations and building trust among learners and stakeholders.

By taking a responsible approach, organisations can ensure that their microcredentialing efforts are not only innovative but also equitable, secure, and adaptable for the future.
Looking Ahead
Technology is more than just a way to expand microcredentials — it is reshaping how learning is recognised and valued. By using approaches built on open standards, organisations can create microcredentialing systems that are secure, accessible to everyone, and focused on the needs of learners.
In the next post, we will explore the challenges and risks associated with microcredentialing, including how to address equity, quality assurance, and unintended consequences.
Reframing Recognition: Part 5 was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
Specialists from different fields in the digital identity ecosystem have shed light on what it takes to be accredited for New Zealand’s Digital Identity Services Trust Framework (DISTF). They shared their thoughts during a webinar on March 26 moderated by the Executive Director of Digital Identity New Zealand, Colin Wallis, during which they also explained details about the process and the benefits that come with the accreditation.
The DISTF is a legal framework designed to regulate digital identity services as New Zealand looks to expand its digital ID offerings. The DISTF’s new rules and accreditation system took effect last November.
The webinar, dubbed “DISTF Evaluators Showcase,” aimed to give those considering DISTF accreditation an understanding of its advantages for small and large-scale entities, organizational roles, costs and timeframes, and the standards-compliance evaluation that precedes the accreditation application.
Discussants included Tom Norcliffe, security consultant at Middleware Group; Marcus Bossert, Director of Cyber, Privacy and Resilience at Deloitte, an accredited evaluator; and Rizwan Ahmad, founder of Cianaa Technologies, all of whom are Digital Identity New Zealand members. They were joined by Deanne Myers, Regulatory Practice Manager at New Zealand’s Department of Internal Affairs.
The first three speakers took time to introduce their companies, highlighting their services and key projects across the domains of digital identity and cybersecurity. They also mentioned some of their projects in New Zealand and the institutions they work with.
Bossert began by explaining the work Deloitte, a consultancy services firm, is doing in the digital security assessment space. He mentioned that the firm offers services in cyber strategy, transformation, digital privacy, trust, enterprise security, application and cloud security, emerging technology solutions, threat detection and response, as well as operational security services. He also said that the firm has worked extensively on digital identity projects.
He said evaluation for the DISTF accreditation process looks at several factors including enterprise security, cloud security, security and resilience mechanisms, and threat detection and response, among others. The official also explained that the firm plays a significant role in guiding organizations through the accreditation process, helping them to align their cybersecurity and privacy expertise in order to ensure successful transactions.
Accreditation not a mere compliance formality
The accreditation, he insisted, is not just a compliance exercise but something that enables entities to demonstrate robust security practices, which are vital for building confidence and trust among stakeholders such as boards, customers, and even regulators.
“If you think about why you want to get accredited, you’ve got to think about the stakeholders that are involved in this. But fundamentally, you have to think about it from a more practical and operational perspective as well,” Bossert said.
“Security, privacy, development and operations teams are really interested in knowing that you have solid security practices built in. It is quite valuable for them to understand and have clarity on what the control measures are. So, I see the accreditation process as a mechanism to build confidence that your stakeholders need,” he stated.
“[Thanks to] the work that we do, our knowledge and global network, we can help you accelerate readiness and navigate the shortest path to success, so that we help you focus on those things that really matter, get your accreditation and accelerate operational readiness.”
For his part, Norcliffe from Middleware Group emphasized the importance of the digital ID trust framework, saying it is crucial for most of the work they are doing with government entities and the private sector in New Zealand.
Taking the floor, Ahmad said Cianaa Technologies has a framework on which their team of independent security evaluators offer services which include penetration testing, privacy, and GDPR compliance.
“We assess organizations based on this [framework]. We see whether you’re keeping the information confidential, whether you have the integrity intact, whether your services are available, whether it has non-repudiation, and if it has the proper authentication and authorization,” he said.
“Now, when we assess your organization based on that, it actually automatically adds up to the right assessment, because if something is missing, then there’s something missing in security.”
Overview of the accreditation application process
Myers said the team she manages is responsible for essentially all aspects of the accreditation process. She gave an overview of the components of the application process required by the Trust Framework Authority (TFA), noting that the entire process is transparent.
“We receive and assess applications for accreditation as a trust framework provider. We monitor ongoing compliance with the requirements of accreditation, assess applications for renewal, and obviously deal with any issues that arise in the course of these processes.”
She explained that a part of the application process requires results of an independent evaluation undertaken by independent evaluators, including a conformance assessment against the New Zealand identification standards. “Those outputs or deliverables will be submitted as part of an application,” she stated.
“There are currently 15 independent security evaluators who have been appointed and three privacy independent evaluators. However, we are currently undertaking an expression of interest process, calling for interest from other agencies who are able and who meet the criteria to be appointed as either a privacy or security evaluator, or both.”
Myers also shared important links and resources which can help those seeking accreditation to better understand what is required of them and how they can go through the process successfully. She also said the application would need to be submitted within 12 months of the standards compliance evaluation.
Understand what you need
The speakers also noted the place of AI in the trust framework evaluation process. Ahmad said while the technology can make a positive contribution to the process, it can also bring about challenges that affect these assessments.
Bossert added that those applying for accreditation must clearly understand what they need, and often, they expect the process to be as quick, painless and cost-effective as possible.
“If you’ve ever done something like an ISO accreditation, you would understand or know that it is very useful to be clear on the scope of accreditation that you’re looking for. So don’t apply for accreditation for areas that you don’t need. For example, if you’re not going to be providing personal information, then don’t sign up for accreditation for that,” he advised.
He also said for evaluators to have their work made easy, those seeking to provide trust services must take certain key factors into consideration, including having the right control and risk management mechanisms in place.
“I would suggest that you have a look at your solution and do a proper risk assessment early on so that you’ve got visibility of those risks and that you can start building in the necessary controls. In terms of risks, if you can demonstrate that you’re actively managing them and you have visibility and control, then that certainly helps to give the confidence needed.”
The speakers also addressed pricing, saying the evaluation process can cost roughly between $10,000 and $50,000, depending on the scope and the entity’s level of preparedness.
Source: biometricupdate.com
The post Experts highlight advantages of New Zealand’s Digital ID Services Trust Framework accreditation appeared first on Digital Identity New Zealand.
Join our community call on April 15th where we will discuss how to strengthen cybersecurity for civil society organizations.
The post Community Call: How to approach cybersecurity in turbulent times appeared first on The Engine Room.
The post Transforming Communications: The Zoom Phone Project at Stevens Institute of Technology appeared first on NJEdge Inc.
Early on, Patricia Morreale, Ph.D., knew she was drawn to a career in science or engineering. With an early exposure to problem-solving through math, science, and the hands-on experience of taking things apart, her interest in these subjects continued to grow. Later during college, her exposure to computer science set her on a path to becoming a leader in the field, a journey that would take her from a curious student to Professor and Chair, Department of Computer Science and Technology, Hennings College of Science, Mathematics, and Technology, at Kean University.
Unlocking Opportunities through Computer Science
During a university summer program following her junior year of high school, Morreale had the chance to explore a programming language and how to apply it to solve problems. “I was fascinated by the idea that I could instruct machines to perform tasks, and this sparked a new interest in programming and engineering,” shares Morreale. “I realized that computer science was at the core of nearly every engineering field, and I liked the idea of being involved in something so fundamental to technology. At the time, I wasn’t sure what specific career I wanted, but I knew I wanted something versatile and portable. Computer science, with its broad applications and adaptability, stood out as the perfect choice.”
As she often tells her students, the beauty of computer science lies in its portability. “You can get off a plane anywhere and they’re going to need you,” says Morreale. “Whether working remotely or joining a local firm, computer science skills are in constant demand. Unlike professions that require specific licensing, computer science offers flexibility and relevance in today’s world. In fact, with the increasing reliance on technology in everyday life, understanding how computers work has become an essential skill for managing the modern world.”
After receiving her B.S. in computer science from Northwestern University, Morreale worked in industry for several years in network design, where she helped build large distributed networks that stretched from coast to coast. “During this time, I returned to graduate school in hopes of opening doors to new opportunities,” shares Morreale. “I wanted more control over the work I was doing and the ability to pursue my own interests, so I took a year off to complete my Ph.D. residency, eventually transitioning into academia with positions at Northeastern Illinois University, and later, Stevens Institute of Technology. Having credentials that are portable allows me the flexibility to explore different opportunities and meet the needs of an ever-changing industry.”
In her transition from industry to academia, Morreale says having colleagues who have also worked in the industry provides a shared understanding of organizational goals, time management, and return on investment that enhances their ability to lead effectively. “My industry experience has been crucial in managing people who are results-driven and focused on efficiency. I’m able to speak to corporate partners, write proposals, and apply a big picture perspective to smaller tasks. We also have a commitment to student success that parallels the industry mindset—viewing students as valuable assets to our department and whose development is key to their broader impact in the world.”
Promoting Interdisciplinary Conversations
Making significant contributions through her research in machine learning and network systems, Morreale has helped drive innovations in error detection and secure processing, which have been patented and brought to market. “My earlier work in network systems and machine learning is a result of partnering with a colleague at Argonne National Laboratories and our interest in detecting degradation in a network environment,” explains Morreale. “By detecting early signs of degradation, we could intervene before a complete collapse or failure occurred, sampling and testing indicators to address issues proactively rather than waiting for a hard failure.”
“These solutions resulted from collaboration and casual conversations among researchers and showed the importance of regularly discussing topics, whether via email, at conferences, or in the hallway,” continues Morreale. “Cross-pollination and interdisciplinary conversation is highly important to overcoming challenges and finding impactful solutions. Talking to different people who all bring unique experiences and perspectives helps generate innovative ideas and leads to breakthroughs that wouldn’t be possible in isolation. So many different fields are impacting computer science—whether it’s artificial intelligence (AI) being used for biological discovery and new drug development or computational methods applied to large and ambitious problems. By working across disciplines, we not only solve technical problems but also address real-world challenges with solutions that are more comprehensive, effective, and far-reaching.”
“I’ve learned many wonderful techniques from NCWIT to encourage students to consider technology careers and find computing experiences that align with their interests. This organization also led me to work with the Computing Alliance of Hispanic-Serving Institutions (CAHSI), which is more than eighty colleges and universities in the United States working together to recruit, retain, and advance students in computing. Working collectively with other institutions who want to empower all students has been very beneficial and has created a larger research and education community.”
— Patricia Morreale, Ph.D.
Professor and Chair, Department of Computer Science and Technology, Hennings College of Science, Mathematics, and Technology,
Kean University
Instilling Computational Thinking
Along with facilitating interdisciplinary collaboration, Morreale is dedicated to expanding participation in computer science by promoting faculty development and encouraging undergraduate students to engage in research opportunities. “I encourage my students from all backgrounds to see themselves as part of the tech world,” says Morreale. “Through outreach programs and campus tours, we create an inclusive environment at Kean where students are reminded, ‘You can be here.’ Whether through K-12 initiatives or conversations with prospective students, I want to make sure they recognize that everyone has a place in computing and that this discipline is fundamental to our day-to-day lives. We aim to flatten access to technology and demonstrate that with the right support, anyone can reach their goals.”
To help improve student success, Morreale says we need to reach students earlier in their education. “As technology and knowledge evolve, many of the topics that were once reserved for graduate courses—such as artificial intelligence, machine learning, and data visualization—are now part of undergraduate and high school curricula. This shift reflects the fact that K-12 students are entering college with a stronger foundation in areas like cybersecurity and the basics of AI, including skills like shaping search criteria and using tools like ChatGPT. These tools are advancing rapidly, and academic institutions need to be prepared to meet students where they are in their understanding.”
“Looking back about ten to fifteen years ago, there was a focus on instilling computational thinking in K-12 students—teaching them to think sequentially, with an understanding of what comes first, second, and third to solve a problem,” continues Morreale. “Today, many students are already coming to college equipped with this kind of thinking, showing that early exposure to computational concepts is paying off. To continue improving student success, it’s crucial to build on this foundation and ensure that higher education institutions are ready to challenge and support these students as they advance in their knowledge of technology and its applications.”
To promote AI and advanced technology education earlier, Kean is making professional development a priority to help faculty be well-equipped to mentor incoming students. “The research topics that today’s undergraduates are ready to take on are even more advanced, so we must ensure we have a sophisticated faculty mentoring program to help our new student researchers build their own ideas, work collaboratively, understand constructive criticism, and manage a project from start to finish.”
As a member of the Council on Undergraduate Research (CUR), Kean is able to access valuable resources and networking opportunities that help support undergraduate research and strengthen the institution’s research presence. “To cultivate the next generation of scientists and researchers, we must invest in undergraduate students, providing them with opportunities to engage in research and develop their skills early on,” says Morreale. “In working with CUR, we can identify and mentor young talent, empowering them to create research posters, write papers, and pursue advanced degrees. I’ve been very fortunate to have had many great students who have gone on to amazing careers, continued on to graduate school, or become teachers who are now shaping the next generation of students.”
Morreale is involved with NCWIT, the National Center for Women & Information Technology, which aims to increase participation in the field of computing. “I’ve learned many wonderful techniques from NCWIT to encourage students to consider technology careers and find computing experiences that align with their interests,” shares Morreale. “This organization also led me to work with the Computing Alliance of Hispanic-Serving Institutions (CAHSI), which is more than eighty colleges and universities in the United States working together to recruit, retain, and advance students in computing. Working collectively with other institutions who want to empower all students has been very beneficial and has created a larger research and education community.”
“Edge fosters a community of scholars and professionals who exchange ideas and collaborate, which is essential for driving impactful research and innovative solutions. This community-building is vital in creating partnerships that can lead to significant advancements. Looking at the history of New Jersey, the state has always thrived on strong partnerships between universities, researchers, and industry leaders, joining together to tackle large-scale challenges. New Jersey’s geographical and cultural diversity further enriches this environment and makes it an ideal place for impactful collaborations to flourish.”
— Patricia Morreale, Ph.D.
Professor and Chair, Department of Computer Science and Technology, Hennings College of Science, Mathematics, and Technology,
Kean University
Supporting Local Workforce Development
To help retain graduates within the state, many institutions are focusing on developing strategies that aim to keep skills within the local workforce rather than seeing professionals take their talents elsewhere. “New Jersey has an outstanding community college and higher education system,” says Morreale. “We want to make sure there are plenty of opportunities for our students within our region to excel in the workforce. The New Jersey AI Hub provides a remarkable opportunity for collaboration between industry and university, and I think our state can lead the nation in supporting workforce development and advancing research and innovation.”
Building a strong support system is essential for engaging students in meaningful research opportunities, significantly enhancing their academic experience and contributing to a positive career trajectory. “We want to demystify the traditional idea of what a research career looks like and show students that research can take many forms and offer diverse career paths in various fields,” explains Morreale. “We begin by introducing topics of interest and research in our classes and encouraging students to meet with us outside of class. Faculty members also inform students about academic careers and the rewards they can offer, which has proven helpful. For example, when a conference paper is accepted, the excitement of presenting it at a prestigious conference, sometimes in fascinating locations, can be a thrilling experience for students.”
Morreale adds, “We also highlight the various rewards academic work can bring, such as having your name published or securing research funding. Many careers lack direct feedback, making it difficult to know if your contributions are making a significant impact. In academia, however, there’s more immediate recognition—students can see the results of their work, whether through published papers or funded grants. I want to make sure I emphasize these successes and explain the process behind them. Kean also provides summer support for faculty-student researcher teams that helps develop critical skills, foster innovation, and provide valuable hands-on experience in collaborative environments.”
Expanding Advanced Technology Education
Kean University is advancing its educational offerings with the introduction of two innovative programs—a Ph.D. in Computer Science and a Bachelor of Science in Artificial Intelligence. These programs aim to solidify Kean’s role as a leader in technological education, catering to the growing needs of both local and global industries. The Kean Board of Trustees approved both programs in December 2024, and they are now under review by the New Jersey President’s Council Academic Issues Committee, with the potential for the AI program to accept students as early as Fall 2025 and the Ph.D. program to follow soon after.
The proposed Ph.D. in Computer Science will focus on key areas such as artificial intelligence, cybersecurity, and data science, aimed at developing the next generation of technology leaders for careers in research, academia, and industry. Similarly, the Bachelor of Science in Artificial Intelligence program will launch with an initial cohort of 25 students, providing graduates with the expertise needed to succeed in the rapidly expanding global AI sector, which is expected to grow to $407 billion by 2027. This program aligns with New Jersey’s efforts to promote AI innovation, investment, and development within the state.
“Institutions like Kean can play a significant role in ensuring that all students are equipped with these crucial ethical skills and determining ways AI can be incorporated into general education. In a landscape where numerous AI tools compete for attention, the goal is to guide students forward responsibly, ensuring they not only keep pace with technological advances but also use them in an ethically sound and meaningful manner.”
“As technology and knowledge evolve, many of the topics that were once reserved for graduate courses—such as artificial intelligence, machine learning, and data visualization—are now part of undergraduate and high school curricula. This shift reflects the fact that K-12 students are entering college with a stronger foundation in areas like cybersecurity and the basics of AI, including skills like shaping search criteria and using tools like ChatGPT. These tools are advancing rapidly, and academic institutions need to be prepared to meet students where they are in their understanding.”
— Patricia Morreale, Ph.D.
Professor and Chair, Department of Computer Science and Technology, Hennings College of Science, Mathematics, and Technology,
Kean University
In looking toward the future, computing is becoming more responsive to individuals, with advancements in AI, personalized systems, and adaptive technologies that cater to users’ unique preferences and requirements. “As computing becomes increasingly tailored to the individual, it opens up new opportunities for innovation, accessibility, and solutions that can significantly improve everyday life and work,” says Morreale.

“Computer science and education research play a crucial role in how people learn, especially since many courses are delivered online or have a digital component,” continues Morreale. “As the pedagogical process evolves, it’s important to examine and integrate areas such as ethics. We have an ongoing project in collaboration with other U.S. schools, funded by the National Science Foundation (NSF) and CAHSI, where ethics modules are being incorporated into computer science courses. I’m working with an ethicist from Kean’s Department of History, and we aim to provide students with the tools to examine AI-generated results critically, understand when AI might “hallucinate,” and determine when it is appropriate—or inappropriate—to use AI technologies.”
While ethics is already part of the curriculum, Morreale emphasizes the need for a deeper focus on how students engage with AI and cite these tools properly. “Institutions like Kean can play a significant role in ensuring that all students are equipped with these crucial ethical skills and determining ways AI can be incorporated into general education. In a landscape where numerous AI tools compete for attention, the goal is to guide students forward responsibly, ensuring they not only keep pace with technological advances but also use them in an ethically sound and meaningful manner.”
Fostering a Supportive Research Community
Edge is dedicated to supporting academic leaders and experts in their mission to drive innovative research and foster transformative initiatives, particularly in the fields of computer science and education. “At Edge, we are committed to supporting academic leaders and experts like Dr. Patricia Morreale in their efforts to advance innovative research, equitable education, and transformative initiatives in computer science,” says Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge. “Her leadership in human-centered computing, responsible AI, and broadening diversity in the field exemplifies the kind of impactful work that drives progress and creates opportunities for future generations.”
“Cross-pollination and interdisciplinary conversation are highly important to overcoming challenges and finding impactful solutions. Talking to different people who all bring unique experiences and perspectives helps generate innovative ideas and leads to breakthroughs that wouldn’t be possible in isolation. So many different fields are impacting computer science—whether it’s artificial intelligence (AI) being used for biological discovery and new drug development or computational methods applied to large and ambitious problems. By working across disciplines, we not only solve technical problems but also address real-world challenges with solutions that are more comprehensive, effective, and far-reaching.”
— Patricia Morreale, Ph.D.
Professor and Chair, Department of Computer Science and Technology, Hennings College of Science, Mathematics, and Technology,
Kean University
In exploring ways to enhance the research and educational capabilities of institutions like Kean, Morreale says partnerships with organizations like Edge help ensure that all participants have access to the necessary computing resources to advance their work. “Through collaboration opportunities, advanced networks, and grant assistance, Edge helps address the challenges that arise when individuals lack reliable internet access or are hesitant to trust digital technologies. These issues are central to promoting an equitable environment where everyone can engage confidently with technology, regardless of background or resources.”
“Edge fosters a community of scholars and professionals who exchange ideas and collaborate, which is essential for driving impactful research and innovative solutions,” continues Morreale. “This community-building is vital in creating partnerships that can lead to significant advancements. Looking at the history of New Jersey, the state has always thrived on strong partnerships between universities, researchers, and industry leaders, joining together to tackle large-scale challenges. New Jersey’s geographical and cultural diversity further enriches this environment and makes it an ideal place for impactful collaborations to flourish.”
The post Leading the Way in Advanced Technology Education and Innovation appeared first on NJEdge Inc.
HYPR, an Identity Assurance Company, has released the fifth edition of its ‘State of Passwordless Identity Assurance Report.’
The report reveals a widening misalignment between real-world security risks and the outdated authentication methods still in widespread use, compounded by the rise of new generative AI-related attacks.

It also signals a potential turning point in the fight against identity-based attacks: phishing-resistant authentication methods such as FIDO passkeys are poised to become the dominant solution within the next two years, a projection the company says is a first in the report’s five-year history.
iProov today launched iProov Workforce MFA.
This device-independent, FIDO Alliance-certified biometric authentication solution helps organizations mitigate the risk of one of workforce security’s most crucial concerns: account takeover.
All change for Microsoft. The company has suddenly confirmed a major update “for over 1 billion end users,” as the deletion of passwords for all users becomes real. Your Microsoft password, it warns, “could be easily forgotten or guessed by an attacker,” and it’s now time “to completely remove the password from your account.”
“The password era is ending,” Microsoft warned in December. “Bad actors know it, which is why they’re desperately accelerating password-related attacks while they still can.” With “7,000 attacks on passwords [blocked] per second… almost double from a year ago,” the company is on a mission to “convince a billion users to love passkeys.”
A passkey replaces password and two-factor authentication (2FA) codes with account authentication linked to a hardware device (or devices) and secured by the same mechanism that unlocks that device, most likely your fingerprint or your face. Unlike a password, a passkey cannot leak or be stolen in bulk, because authentication requires the physical device. And unlike 2FA codes, it cannot be intercepted or bypassed.
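For developers, passkeys surface through the W3C WebAuthn browser API. Below is a minimal registration sketch in TypeScript; the rp, user, and challenge values are illustrative placeholders, and a real relying party would generate the challenge server-side and verify the returned credential there.

```ts
// Minimal passkey (WebAuthn) registration sketch for the browser.
// All names and values here are illustrative, not from any cited service.
async function registerPasskey(challenge: Uint8Array, userId: Uint8Array) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge, // random bytes issued by the server
      rp: { name: "Example Service", id: "example.com" },
      user: { id: userId, name: "alice@example.com", displayName: "Alice" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  });
  // The private key never leaves the authenticator; only the public key
  // and attestation travel to the server.
  return credential as PublicKeyCredential;
}
```

Sign-in then uses navigator.credentials.get() with a fresh challenge; because the browser only answers for the origin the credential was registered to, the flow is phishing-resistant by construction.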
Phase I of the Global Battery Passport System Demonstrated Product Traceability with Privacy-Preserving Capabilities and Selective Data Disclosure
Los Angeles—31 March 2025: MOBI, a nonprofit Web3 consortium of global industry leaders, announces the successful completion of the first year in its three-year initiative to build the Global Battery Passport System (GBPS). This milestone marks a significant step in creating a secure, Self-Sovereign Identity and Data framework for seamless data spaces interoperability and digital product passports. MOBI is actively demonstrating data spaces interoperability with other global consortia such as Gaia-X. While the GBPS initiative focuses on battery data exchange as its use case, the underlying technology is agnostic and can be leveraged to enhance trust, compliance, and operational efficiency for all digital transactions and traceability.
The MOBI GBPS initiative is being spearheaded by members of the Circular Economy and the Global Battery Passport (CE-GBP) Working Group, including Anritsu, DENSO, HIOKI, Honda, Mazda, Nissan, Orico, Suzuki, and TradeLog.
During Phase I, which took place from January to December 2024, working group members concentrated on evaluating global battery regulatory requirements and developing use cases aligned with Web3 technology for secure data exchange and management. Critically, implementers successfully launched and ran nodes on the federated Integrated Trust Network (ITN), a member-built and operated registry for World Wide Web Consortium (W3C) Decentralized Identifiers (DIDs). The ITN provides the core trust services of governance, authority, identity, and assurance to enable verifiable transactions between Self-Sovereign Digital Twins (user agents) whose DIDs are anchored in the ITN.
Building on the success of Phase I, MOBI and its partners are now advancing to Phase II. In Phase II, implementers will leverage ITN core services to test transactions within the Citopia Decentralized Marketplace (DM). Anticipated technical achievements in Phase II include Web3 ecosystem onboarding, asset ownership traceability, secure data exchange, and Apps building with Citopia DM.
The GBPS will operate as a specialized user-controlled application within Citopia DM, which supports secure communication, transactions, and data sharing across industries with zero-knowledge proofs. Citopia DM provides the interoperability and data sovereignty that Web3 ecosystems and value chains require. Through this setup, the GBPS will offer a privacy-focused solution that protects proprietary information and promotes circularity in line with current and future regulations, while serving as a robust foundation for use-case-specific applications rooted in trusted and verifiable battery data, such as:
management of battery passports, used electric vehicle (EV) pricing, green financing and carbon credits, charging management, and more.

While most battery passport solutions focus on public data disclosure, Citopia GBPS emphasizes secure sharing of proprietary data. The GBPS leverages zero trust architecture and selective data disclosure, where only relevant information is shared with intended recipients. This privacy-centric design facilitates regulatory compliance while safeguarding sensitive data and reducing the need for intermediaries. The result is greater trust across value chains and a Web3 system for data spaces interoperability across borders and stakeholders.
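MOBI has not published the GBPS internals, but the general shape of selective disclosure can be illustrated with a common salted-hash pattern (the approach SD-JWT-style credentials use). The field names and values below are invented for illustration; the actual GBPS relies on Citopia DM and zero-knowledge proofs.

```ts
// Selective disclosure sketch: the signed document holds only salted
// digests, and the holder reveals individual (salt, field, value) triples
// to each recipient, who recomputes the digest to verify it.
async function digestField(salt: string, field: string, value: string) {
  const bytes = new TextEncoder().encode(`${salt}|${field}|${value}`);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// The issuer signs, e.g., [h1, h2] where
//   h1 = await digestField(salt1, "batteryChemistry", "NMC 811")
//   h2 = await digestField(salt2, "stateOfHealth", "92%")
// A recycler asking about chemistry receives only the first triple and
// checks it against h1; the other fields stay undisclosed.
```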
“As current and future regulations develop, solutions like the ITN and Citopia DM are essential for addressing industry and regulatory needs. MOBI is proud to work with our members in developing a solution that is agnostic to existing data spaces along with Web2<>Web3 interoperability to support business objectives in creating a more secure digital future and meeting circular economy goals,” said Tram Vo, CEO and Co-founder of MOBI.
“A global battery passport system has the potential to unlock new levels of traceability, transparency and security in battery-related applications, supporting our efforts to contribute to greener and safer mobility,” said Roger Berg, vice president of North America Research and Development at DENSO. “While that’s an important goal for our team, this solution also represents more than that. It’s a great example of how MOBI and our fellow members are working together to support a more circular economy and sustainable world.”
“The second stage of Phase I has provided further valuable insights to CE-GBP WG partners. In Stage 1, the basic data sharing environment was established. In Stage 2, the first business-related technical capabilities were explored, such as the ownership management of a DID-based asset and the selective disclosure of data between two organizations,” said Christian Köbel of Honda. “These successful learnings have encouraged us to further explore the potential of a self-sovereign data ecosystem to address future challenges of the circular economy.”
“The functionality of DIDs and Verifiable Credentials for reliable data federation among implementing companies has been successfully verified, utilizing the Integrated Trust Network. This endeavor not only demonstrates the potential of these technologies to enhance data federation but also emphasizes the constructive and agile collaboration achieved among implementers within the cooperative framework,” said Yusuke Zushi, Senior Manager at Nissan.
“In Phase 1 of the Global Battery Passport (GBP) system demonstration, we learned that the MOBI GBP can be used to exchange highly confidential information. We want to thank everyone involved in Phase 1,” said implementers at Mazda Motor Corporation.
Said Hisashi Matsumoto of Anritsu, “MOBI’s activities have led to the successful completion of Phase I, moving towards building an ecosystem that enables reliable and secure information exchange. We are grateful to have participated in the verification of use cases with practical operations. We hope that MOBI’s efforts will continue to contribute to a more sustainable circular economy.”
“We are very proud of the great results of Phase I which we have achieved based on our strong relationships with major OEMs, Tier 1 suppliers, and more. It has been an exciting and interesting experience to work with such talented people. We will continue to take on challenges in Phase II in the name of collaboration,” said Alvin Ishiguro, Project Coordination at TradeLog, Inc.
MOBI and its members are committed to driving a future where collaborative, decentralized technologies foster innovation and transparency across industries.
About MOBI
MOBI is a global nonprofit Web3 consortium. We are creating standards for trusted Self-Sovereign Data and Identities (e.g. vehicles, people, businesses, things), verifiable credentials, and data spaces interoperability. Our goal is to make the digital economy more efficient, equitable, decentralized, and circular while preserving data privacy for users and providers alike. For additional information about joining MOBI, please visit www.dlt.mobi.
Media Contact: Grace Pulliam, MOBI Communications Manager
Email: grace@dlt.mobi | Twitter: twitter.com/dltmobi | Linkedin: MOBI
###
The post MOBI Members Lead the Way for Web3 Digital Product Passports and Data Spaces Interoperability first appeared on MOBI | The New Economy of Movement.
This is the fourth in a series of blog posts drawing on insights from a report authored by We Are Open Co-op for the Irish National Digital Leadership Network (NDLN). The report explores the historical roots of credentialing, the emergence of microcredentials, and the opportunities they present for reshaping education and professional development.
Part 1 — Introduction and Context
Part 2 — The Evolution of Credentialing
Part 3 — Demystifying Microcredentials
Part 4 — Trends Shaping the Future of Microcredentials (this post)
Part 5 — The Role of Technology in Microcredentialing
Part 6 — Challenges and Risks in Microcredentialing
Part 7 — A Vision for the Future of Microcredentials

Microcredentials are emerging as a significant development in education and professional development, reflecting broader shifts in how learning and skills are recognised. This post examines key trends shaping their future and explores how these developments are driving change across sectors. Our guiding concept is Open Recognition, which encourages organisations to think more holistically about the knowledge, skills, and behaviours they seek to promote.
Skills-Based Hiring

One of the most influential trends is the growing emphasis on hiring based on specific skills rather than traditional qualifications. Employers are increasingly seeking candidates with demonstrable competencies that align with their organisational needs. Microcredentials provide a structured way to validate these skills in a way that is transparent and verifiable.
For organisations, skills-based hiring supported by microcredentials allows for more inclusive recruitment practices, reducing reliance on traditional proxies like degrees. However, to succeed, microcredentials must be recognised as credible and portable, ensuring they meet the expectations of both employers and job seekers. This means they should be built on open standards such as Open Badges.
Image CC BY-ND Visual Thinkery for WAO

Stackable Learning Pathways

The modular nature of microcredentials enables learners to build their qualifications incrementally, creating personalised pathways. This approach is particularly valuable for individuals balancing education with work or other commitments. Stackable credentials also align with the concept of lifelong learning, allowing learners to upskill or reskill as needed throughout their careers.
Aligning microcredentials with national or international qualification frameworks can enhance their value and portability, ensuring that learners can seamlessly combine credentials from different institutions or sectors. This alignment also supports employers and other organisations in recognising and valuing these credentials across diverse contexts. By working in collaboration, stakeholders can create systems that not only validate individual achievements but also enable progression and opportunity, therefore creating a connected and inclusive learning ecosystem.
The Role of Open Standards

The adoption of open standards, such as Open Badges and Verifiable Credentials, is critical to the future of microcredentials. These standards ensure that credentials are portable, secure, and interoperable across systems. By making use of metadata, they provide detailed information such as the knowledge, skills, and/or behaviours validated, the criteria met, and the issuing organisation. The metadata may also include endorsements made by individuals and organisations, further enhancing the value of the microcredential.
Open standards also facilitate the integration of microcredentials into digital wallets, meaning individuals can store and share their achievements easily. This approach empowers learners to take ownership of their credentials while ensuring transparency and trust.
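To make that metadata concrete, here is a simplified sketch of the kind of information an Open Badges 3.0 credential (a W3C Verifiable Credential) carries. All values are invented for illustration; a real badge also includes full @context entries, identifiers, and the issuer’s cryptographic proof.

```ts
// Simplified sketch of an Open Badges 3.0 credential. Illustrative only:
// real credentials carry @context entries, ids, and a signed proof block.
const badge = {
  type: ["VerifiableCredential", "OpenBadgeCredential"],
  issuer: { id: "https://example.edu/issuers/1", name: "Example University" },
  credentialSubject: {
    type: ["AchievementSubject"],
    achievement: {
      name: "Data Analysis Fundamentals",
      description: "Demonstrated competence in exploratory data analysis.",
      // the criteria met, as the post describes
      criteria: { narrative: "Completed a peer-reviewed analysis project." },
    },
  },
  // endorsements by other individuals or organisations can be attached,
  // further enhancing the value of the credential
};
```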
Technology and Innovation

Emerging technologies are transforming how microcredentials are issued, managed, and verified. For example, artificial intelligence (AI) is being used to map skills and recommend personalised learning pathways.
However, these technologies must be implemented responsibly. Issues such as data privacy, ethical AI use, and accessibility must be addressed to ensure that technological advancements benefit all learners and do not exacerbate existing inequalities.
Collaboration Across Sectors

The success of microcredentials depends on collaboration between education providers, employers, policymakers, and technology developers. These partnerships can ensure that microcredentials remain relevant and aligned with industry needs while encouraging innovation in how learning is recognised.
For example, microcredentials can be used across sectors:
Charities — Recognise skills gained through volunteer work, fundraising activities, and advocacy campaigns, enabling participants to showcase their contributions effectively.
NGOs — Validate the skills of community workers and volunteers, making their contributions visible and valued.
Co-ops — Highlight informal learning and collaboration, supporting shared growth and innovation among members.
Businesses — Align microcredentials with organisational values, such as sustainability or diversity, to enhance staff development and retention.
Higher Education — Partner with businesses to design credentials for emerging fields like green technology or AI ethics, and create opportunities for targeted, stackable learning pathways.

Looking Ahead

The trends shaping microcredentials reflect a shift towards more flexible, inclusive, and skills-focused education systems. By embracing open standards, leveraging technology, and fostering collaboration, organisations can unlock the full potential of microcredentials to meet the needs of learners and society.
In the next post, we’ll explore the role of technology in greater depth, examining the digital tools and frameworks that underpin modern microcredentialing systems.
Reframing Recognition: Part 4 was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
In this episode, privacy advocate Zach Edwards returns, joined by ISL’s technology director Irene Knapp, to discuss the real-time bidding data broker supply network. Join us as we get geeky looking at two data brokers in the news – Gravy Analytics and Eskimi – and how adtech entities “listen” to the real-time bidstream to collect and sell personal information, including location data.
The post “Unsafe at Any Click” – Episode 6 appeared first on Internet Safety Labs.
This white paper is part of a three-part series on preventing phishing attacks through passkey deployment:
Part 1: Overview – Introduces the concepts of a passkey journey toward phishing prevention.
Part 2: Partial prevention – Details strategies for enforcing passkeys in specific scenarios.
Part 3: Full prevention – Explains how to achieve comprehensive phishing resistance.

Making your services phishing-resistant takes more than one day, because you are not just adopting a new phishing-resistant authentication method. It is a journey with multiple stages where you improve security by strengthening account login and recovery processes. This paper outlines the passkey journey and defines the authentication and recovery requirements for each stage.
Audience

Relying parties and developers who want to protect their applications from phishing attacks by adopting passkeys.
You can read the white papers on Passkey Central or download PDF versions of each part.
The post Digital Credentials: Enhancing Trust, Expanding Opportunity, Enabling Mobility appeared first on Velocity.
The post Verifiable Credentials – Bringing Trust and Truth to Talent Acquisition appeared first on Velocity.
Kia ora,
A Farewell Message
This is the final DINZ newsletter under my watch, but if the writing style seems familiar in April’s newsletter, you’ll know that AI has been deployed or my successor hasn’t been fully onboarded.
It’s been a blast, it really has! I’m satisfied with DINZ’s development under my tenure. No regrets. DINZ has punched way above its weight with minimal resources – only made possible by larger organisations whose support enabled volunteer members and supporters from organisations of all sizes to dedicate time and expertise to deliver our mahi and advance its mission. You know who you are, so thank you!
Thanks also to the DINZ Executive Councils I have served over the years. It’s a largely unheralded gig, but governance in emerging ecosystems is crucial.
DINZ Updates
Industry Insights
There are some ‘green shoots’ appearing in Aotearoa New Zealand’s digital identity space, but we have years of catch-up ahead to regain our global leadership. We have to be more nimble, smart and collegial in all dimensions (detailed examples would exceed the newsletter’s word count), but suffice to say that it’s Kiwi companies like MATTR, Authsignal, JNCTN, APLYiD, MyMahi and others that keep us on the map globally.
While taking more leisure time, I’m open to some advisory work leveraging my decades of knowledge, experience and contact networks in both local and international settings – the kind of engagement not often possible in this role.
What’s Happening Around the Globe
As always, I’m sharing links to global news that resonated with me.
Final Thoughts
That’s it! Make sure you do something ‘identiful’ in April or attend the regional virtual catch-ups on Identity Management Day, and I’ll see you all soon out there in cyber.
Ngā mihi,
Colin Wallis
Executive Director, Digital Identity NZ
Farewells, new beginnings and the constant that is change
Read full news here: Farewells, new beginnings and the constant that is change
SUBSCRIBE FOR MORE

The post Farewells, new beginnings and the constant that is change appeared first on Digital Identity New Zealand.
Beginning in 2025, The Engine Room will host and maintain the Cybersecurity Assessment Tool (CAT) as part of our support programming.
The post Re-homing the Cybersecurity Assessment Tool (CAT) appeared first on The Engine Room.
The media release is an official statement by the Swiss Federal Government announcing the strategic direction and implementation steps for digital identity and trust infrastructure.
On 21 January 2025, the Swiss Federal Office of Justice (FOJ), in collaboration with the Federal Chancellery and the Federal Office of Information Technology and Telecommunications (FOITT), published an important statement on the future of digital identity and trust infrastructure.
As DIDAS – the Swiss association for digital trust – we are proud to have actively contributed to this milestone. Through constructive consultation, cross-sector collaboration and the sharing of insights from our community, we helped shape the direction of this foundational work. A special thank you goes to our co-authors and all those who support this joint effort.
Digital trust infrastructure – such as the E-ID, verifiable credentials, and interoperable wallets – is essential for the digital resilience and competitiveness of Switzerland. These are not just technological building blocks, but key enablers of a sovereign, secure and citizen-centric digital society.
This milestone also directly connects with the recently published vision by FIND and the State Secretariat for International Finance (SIF):
👉 Pathway 2035: Unlocking Financial Innovation for Switzerland
The message is clear: Switzerland’s ability to lead in financial innovation depends on a trusted digital foundation. At DIDAS, we are committed to enabling this foundation – bringing together policy, technology, and ecosystem actors to reduce friction, build interoperability, and establish trust by design.
Because in the end, Digital Trust is not a nice-to-have – it is a strategic imperative for Switzerland’s future.
Digital Trust is now firmly positioned on Switzerland’s strategic, political, and economic agenda.
#DigitalTrust #EID #TrustInfrastructure #Switzerland #SIF #FIND #Pathway2035 #FinancialInnovation #DIDAS
Do you ever think about food safety when you sit down for a meal? It’s easy to take for granted, but behind every meal, strict standards and practices ensure the food we consume is safe.
In this replay episode, we revisit our conversation with Darin Detwiler, Founder and CEO of Detwiler Consulting Group. Darin’s path to food safety is deeply personal, driven by the tragic loss of his son to E. coli.
Darin shares how the food safety industry is adapting to technological advancements like data analytics, AI, and digital solutions while meeting the ongoing demand for consistent production. If you've ever wondered about the efforts behind keeping food safe, this episode provides an inside look at the evolving food safety landscape and how we can continue protecting consumers in a rapidly changing environment.
In this episode, you’ll learn:
How digital solutions like data analytics and blockchain balance long-term and short-term food safety goals
The need for courage in food safety leadership to proactively manage and prevent crises
The power of social media to help improve food safety and transparency
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(00:45) What led Darin into the food safety industry
(04:05) What Detwiler Consulting Company offers
(09:20) New technology and trends in the food safety industry
(14:08) How Darin and his team use AI and evolve it
(16:39) Big failures that have taken place in the food safety industry
(23:47) Darin’s favorite technology at the moment
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Darin Detwiler on LinkedIn
This document highlights areas where FIDO offers the best value for U.S. Government use cases as an enhancement of existing infrastructure. The aim is to minimize rework as U.S. Government agencies advance their zero trust strategies, with phishing-resistant authentication tied to enterprise identity as the foundation.
For any questions or comments, please contact feedback@fidoalliance.org.
Note this white paper has been revised – March 2025.
DOWNLOAD THE WHITE PAPER

At ISL we pride ourselves on being dispassionate truth-seekers when it comes to assessing risky behaviors in technology. But this moment in history requires expressing our passion for truth and our core values more openly.
What’s currently happening in the US is, by design, disruptive, disorienting, and discouraging for many. The calculated dehumanizing of trans people and immigrants, coupled with the TESCREAL eugenics AI agenda elucidated by Timnit Gebru [1], foretells a worrying, broader dehumanization campaign ahead. As does the ongoing systematic removal and censorship of any kind of diversity, trans rights or related language on government sites. These actions inexorably lead to the detention and removal of targeted people, which, sadly, has begun.
This moment calls for pausing, ignoring the external chaos, and looking within to our own guiding north stars, speaking out for the values within us that reflect deep truths. Deep truths such as: love and celebration of diversity are vital for both humankind’s and the planet’s survival; they are certainly vital for the work that we do every day at ISL.
Diversity, equity, and inclusivity are the way. Any other exclusionary, reductive, oppressive, dehumanizing way is antithetical to the truths of human kinship and interdependence and denies the reality that healthy ecosystems need diversity to sustain and thrive.
ISL exposes safety risks inherent in tech when used as intended. In other words, we’ve been establishing reasonable safety standards for the behavior of software-driven products and services. But how can we contemplate “safety” without the necessary question: “for whom?”
Many people think that ISL is solely focused on the safety of tech used by children. It may surprise you to know that that’s not our mission. It’s the brunt of our currently funded work, but our corporate vision is:
“A world where all digital products are safe for humans and humankind.”
In other words, safety for all.
I take this moment to reaffirm ISL’s commitment to prioritizing and fostering conditions to create a truly diverse, inclusive, and equitable organization.
[1] https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599

The post The Tao of Diversity, Equity, and Inclusivity appeared first on Internet Safety Labs.
Recent developments in the UK have once again highlighted the importance of user-controlled encryption for protecting personal data. Due to government demands in the United Kingdom, Apple has recently announced that they have stopped offering end-to-end encrypted iCloud storage, Advanced Data Protection (ADP), to new users, and will disable the feature to all users in the country at an unknown point in the future.
What does this mean for you?
In the short term it means that UK user data is less protected, and potentially vulnerable to misuse or breaches. Without end-to-end encryption (E2EE), data stored in iCloud is accessible to Apple and, by extension, potentially to government agencies and bad actors. In the long term it sets a dangerous precedent that Apple can turn off this crucial security feature for any group of users at any time, raising concerns about digital privacy worldwide.
What can we do about it?
The answer is relatively simple: we need to encrypt our own data.
Relying on a third party to keep our personal data safe — even big names like Apple, Amazon, or Microsoft — will always carry risk. Policies change, and external governmental pressures can force providers to weaken security measures, leaving the user exposed.
By layering End-to-End Encryption (E2EE) on cloud storage services ourselves, we can ensure that our data is protected from unwanted access, even from the storage services themselves (in case they want to mine the content of our data).
And the best part? We can do it for free by using DIDComm!
What is DIDComm?
One powerful and open-source solution for encrypting data before cloud storage is the DIDComm protocol. Hosted by the Decentralized Identity Foundation (DIF), DIDComm provides a secure, private communication methodology built on Decentralized Identifiers (DIDs).
When used to establish an end-to-end encrypted channel, DIDComm ensures that only essential delivery metadata remains in plaintext, while everything else — including the body, attachments, and so on — is encrypted for the intended recipients only. In practice, this means we can use the DIDComm message format to encrypt files as they are prepared for synchronization from a local folder to the cloud storage service, ensuring that only the intended recipient (you) can access them.
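To see the core idea in miniature, the sketch below shows the hybrid “encrypt to a recipient’s public key” pattern that DIDComm’s encryption builds on, using only the standard WebCrypto API. This is not the DIDComm message format itself; a real stack would emit a JWE envelope with recipient keys resolved from DIDs, but the privacy property is the same: the storage provider sees only ciphertext.

```ts
// Hybrid encryption sketch (ECDH + AES-GCM) in the spirit of DIDComm's
// anonymous encryption. Runs in modern browsers and Node 19+.
async function encryptForRecipient(file: Uint8Array, recipientPub: CryptoKey) {
  // Ephemeral key pair; its public half travels with the ciphertext.
  const eph = await crypto.subtle.generateKey(
    { name: "ECDH", namedCurve: "P-256" }, true, ["deriveKey"]);
  // Shared AES key derived from our ephemeral private key + their public key.
  const aesKey = await crypto.subtle.deriveKey(
    { name: "ECDH", public: recipientPub }, eph.privateKey,
    { name: "AES-GCM", length: 256 }, false, ["encrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv }, aesKey, file);
  const ephPub = await crypto.subtle.exportKey("raw", eph.publicKey);
  // Upload { ephPub, iv, ciphertext }: the provider sees only opaque bytes.
  return { ephPub, iv, ciphertext };
}
```

Decryption mirrors this on the recipient’s side: the same AES key is derived from the stored ephemeral public key and the recipient’s own private key.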
How to use DIDComm for cloud encryption
Steve McCown, Chief Architect at Anonyome Labs, was the first to create a practical method for encrypting files with DIDComm before storing them into the cloud. His recent GitHub release provides a thorough breakdown of how it all works and offers a step-by-step guide.
For those looking to dive deeper,
Join one of our upcoming DIDComm user group calls
Join the discussion on DIF’s Discord in the #didcomm-user-group channel

New figures reveal that over 200,000 myGov users had stopped using passwords in favour of passkeys as their sole login method by the end of last year.
VicRoads, Victoria’s road transport authority, has implemented a passkeys authentication system as part of its digital security enhancement initiative, marking a significant step in Australia’s broader transition toward advanced digital identity solutions. The new system moves away from traditional password-based authentication methods toward a more secure passwordless approach, following similar changes by major technology providers like Microsoft in recent months.
Fime’s testing laboratories in both EMEA and Taiwan have obtained full accreditation under the FIDO Alliance Identity Verification (IDV) Certification Programme.
This certification allows the company to assess and validate identity verification vendors’ Document Authenticity and Face Verification solutions, contributing to fraud prevention efforts while ensuring compliance with industry standards.
Growing concerns over deepfakes drive standardisation

The introduction of FIDO’s IDV Programme comes in the context of increasing concerns about AI-driven fraud. According to the official press release, despite over 70 billion digital identity verification checks conducted in 2024, more than half of users remain worried about the risks posed by deepfakes and other fraudulent activities. The programme establishes a unified accreditation process to ensure remote identity verification solutions are secure and resistant to manipulation.
A representative from Fime stated that remote identity verification is essential for sectors such as banking and digital ID enrolment, given the rapid advancements in deepfake technology. The official highlighted the importance of FIDO IDV Certification in helping service providers ensure that their vendors deliver reliable, validated solutions capable of protecting users and mitigating risk.
Officials from the FIDO Alliance emphasised that the certification programme is designed to strengthen security during onboarding and enrolment processes. They noted that, alongside biometric component certification, this initiative aims to reduce reliance on traditional passwords while enhancing security and user experience.
87% of companies have, or are in the midst of, rolling out passkeys with goals tied to improved user experience, enhanced security, and compliance, according to the FIDO Alliance.
Key findings

Enterprises understand the value of passkeys for workforce sign-ins. Most decision makers (87%) report deploying passkeys at their companies. Of these, 47% report rolling out a mix of device-bound passkeys (on physical security keys and/or cards) and synced passkeys (synced securely across the user’s devices).
Organizations are prioritizing passkey rollouts to users with access to sensitive data and applications, including the three most commonly cited priority groups: those requiring access to IP (39%), users with admin accounts (39%), and users at the executive level (34%). Organizations leverage communication, training, and documentation within these deployments to increase adoption.
Passkey deployments are linked to significant security and business benefits. Respondents report moderate to strong positive impacts on user experience (82%), security (90%), help center call reduction (77%), productivity (73%), and digital transformation goals (83%).
Groups that do not have active passkey projects cite complexity (43%), costs (33%), and lack of clarity (29%) about implementation as reasons. This signals a need for increased education on rollout strategies, given the gap between these perceived barriers and the benefits reported by organizations that have already deployed passkeys.
In this episode, Jenna Barron interviews Andrew Shikiar, CEO and executive director of FIDO Alliance. They discuss the state of passkey adoption in the industry today and how organizations can prepare for adopting them.
Key talking points include:
Why passkeys are more secure than passwords
How widespread their adoption is
Ways organizations can prepare for broader passkey adoption

Visit Passkey Central for more resources on passkeys: https://www.passkeycentral.org/home
Fime has achieved full FIDO Alliance Identity Verification (IDV) Certification Program accreditation across multiple regions. Both the Fime EMEA and Fime Taiwan testing laboratories can now support identity verification vendors in certifying their Document Authenticity and Face Verification solutions, helping combat fraud while enhancing the user experience.
With over 70 billion digital identity verification checks conducted in 2024, a reported 52% of people are still concerned about deepfakes and AI-driven fraud. To address this, FIDO introduced the IDV Program, providing a standardized accreditation that ensures remote digital identity verification solutions are secure, reliable, and fraud resistant.
The apparel industry is working to be more sustainable, but verifying those claims is complicated. With over 70 certifications and no standardized way to share data, brands and retailers struggle to track sustainability efforts efficiently.
In this episode, Amy Reiter, Senior Director of Customer Success for the Apparel and General Merchandise Initiative at GS1 US, joins hosts Reid Jackson and Liz Sertl for a conversation on the challenges of sustainability certifications in apparel. Many companies still rely on PDFs and spreadsheets to verify organic cotton claims and responsible manufacturing. GS1 US is working to change that by improving data-sharing processes and exploring how GTINs (Global Trade Item Numbers) can streamline certification tracking.
Tune in to learn how the industry is tackling sustainability verification and what it means for brands, retailers, and consumers alike.
In this episode, you’ll learn:
Why GTINs are critical for accurate sustainability claims
How brands and retailers can replace inefficient certification tracking
The growing role of machine-readable data in product transparency
Jump into the conversation:
(00:00) Sustainability Journey at GS1 US
(03:16) Streamlining Textile Certification Data
(06:51) GTIN Certification Process Overview
(10:22) Gen Z Drives Eco-Label Awareness
(13:04) Seamless Machine-Readable Data Sharing
(18:28) Sustainability Practices in Business
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Amy Reiter on LinkedIn
Major technology companies Microsoft, Google, and Apple are driving widespread adoption of passkeys as an alternative to traditional passwords, leveraging biometric authentication methods like facial recognition and fingerprint scanning for enhanced security and user convenience. The initiative builds on the FIDO Alliance standards that these companies have been developing since 2019.
The initiative, which began with a joint announcement by the three tech giants in 2022, has now reached full implementation across all major platforms. Users can access passkey functionality through their devices’ built-in biometric systems, enabling seamless authentication across various services and applications. Microsoft has recently announced plans to implement passkeys for over one billion users in response to a 200 percent increase in cyberattacks.
Do you think your trusty 8-character password is safe? In the age of AI, that might be wishful thinking. Recent advances in artificial intelligence are giving hackers superpowers to crack and steal account credentials. Researchers have demonstrated that AI can accurately guess passwords just by listening to your keystrokes. By analyzing the sound of typing over Zoom, the system achieved over 90% accuracy in some cases.
And AI-driven password cracking tools can run millions of guess attempts lightning-fast, often defeating weak passwords in minutes. It is no surprise, then, that stolen or weak passwords contribute to about 80% of breaches.
The old password model has outlived its usefulness. As cyber threats get smarter, it is time for consumers to do the same.
Google has announced plans to phase out SMS-based authentication for Gmail accounts in favor of more secure methods like QR code verification and passkeys. The change follows similar moves by other tech giants like Microsoft and Apple to strengthen authentication methods as part of the company’s broader security enhancement initiatives.
A new report from the FIDO Alliance aims to understand the state of passkey deployments by enterprises in the U.S. and UK, including methods for deploying FIDO passkeys, total employees enrolled and perceived barriers to deployment.
Based on a survey of 400 IT professionals (200 from each country), the report says passkey adoption for employee sign-ins is a high or critical priority for two thirds of respondents, and that the majority of enterprises have “either deployed or are in the midst of deploying passkeys with goals tied to improved user experience, enhanced security and standards/regulatory compliance.”
The FIDO Alliance along with underwriters Axiad, HID, and Thales today released its State of Passkey Deployment in the Enterprise report, finding that 87% of surveyed companies have, or are in the midst of, rolling out passkeys with goals tied to improved user experience, enhanced security, and compliance.
Today, artificial intelligence (AI) is rapidly reshaping the Internet, driving a historic transformation in how we engage, work, and communicate online.
However, the rise of generative AI has also led to an explosion of deepfakes, hallucinating language models, and the rapid creation of untrustworthy content — threatening the foundation of authentic communication and learning. AI-generated content now dominates the internet, making it increasingly difficult to distinguish reality from fabrication.
While AI unlocks significant advancements, it also introduces equally substantial risks, from intellectual property infringements to illegal content, such as child sexual abuse materials.
It is for this reason that we founded umanitek.
At umanitek, our mission is to fight against harmful content and the risks of AI by promoting technology that serves the greater good of humanity.
Our founders, Trace Labs, Ethical Capital Partners and AMYP Ventures AG (part of a Piëch/Porsche Family Office) bring together their capabilities in building reliable and trusted AI systems, their connection to networks that fight for the removal of internet harm, and their ability to raise awareness of the importance of knowledge and education in the age of AI.
But this is too big of a challenge to go at it alone. Recognizing the magnitude of this issue, we actively seek partnerships with institutions and individuals dedicated to ethical AI development. We want to partner with investors who are focused on “tech for good” solutions where societal impact is of equal importance to commercial success and to work with tech leaders, policymakers, and law enforcement to make internet safety the standard in the age of AI.
Balancing innovation with responsibility in the age of AI.

Our vision is to leverage umanitek’s technology to enable corporations and individuals to control their data, technology, and resources without compromising security, privacy, or intellectual property.
Here’s but one quick example of how umanitek will work.
Far too many people are concerned about the non-consensual sharing of their personal images or those of their children. Umanitek will enable companies, law enforcement, NGOs, and individuals to upload “fingerprints” of personal photos to a decentralized directory. This system will help large technology platforms identify and prevent the distribution of such content.
As a potential next step, the system could also streamline the prosecution of offenders through collaboration with law enforcement, reducing the cost and complexity of legal action related to copyright infringements.
When organizations and individuals can choose what to share and how to share it in a secure and verifiable way, all internet users benefit. Protecting legitimate content and preventing large language models from training on non-consensual data are integral to harm reduction online. We believe this is an important step to making internet safety the standard in the age of AI, reducing harmful content, and enabling trusted AI solutions.
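As a toy illustration of how a photo “fingerprint” can work without sharing the photo itself, the sketch below computes a simple average hash over an already-downscaled 8×8 grayscale image. Production systems use far more robust perceptual hashes (such as PhotoDNA or PDQ), but the privacy property is the same: only the hash, never the image, enters the directory.

```ts
// Toy perceptual fingerprint: a 64-bit "average hash".
// Input: 64 grayscale values (0-255) from an image downscaled to 8x8.
function averageHash(gray8x8: number[]): string {
  const mean = gray8x8.reduce((a, b) => a + b, 0) / gray8x8.length;
  let bits = 0n;
  gray8x8.forEach((v, i) => {
    if (v > mean) bits |= 1n << BigInt(i); // 1 = brighter than average
  });
  return bits.toString(16).padStart(16, "0");
}
```

Platforms would then compare fingerprints by Hamming distance rather than comparing images, so near-duplicates of a flagged photo can be caught without the photo itself ever being shared.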
Fighting the good fight.
“We invested in OriginTrail to drive transparency and trust for real-world assets. Now, we’ve co-founded umanitek to combat harmful content, IP infringements, and fake news — leveraging OriginTrail technology across internet platforms.”
— Chris Rynning, AMYP Ventures AG (part of a Piëch/Porsche Family Office)
An unprecedented alliance for ethical AI.

Umanitek stands out by combining the expertise of three leaders in their fields:
Trace Labs (core developers of OriginTrail) — The pioneers of neuro-symbolic AI, building trusted and verifiable AI systems. They are the developers behind the OriginTrail Decentralized Knowledge Graph (DKG), a technology that enhances trust in AI, supply chains, and global data ecosystems.
Ethical Capital Partners (ECP) — A private equity firm seeking out investment and advisory opportunities in industries that require principled ethical leadership. Founded in 2022 by a multi-disciplinary team with legal, regulatory, law enforcement, public engagement, and finance experience, ECP’s philosophy is rooted in identifying companies amenable to a responsible investment approach and working collaboratively with management teams in order to develop strategies to create value and drive growth.
AMYP Ventures AG (part of a Piëch/Porsche Family Office) — A venture capital group backing game-changing AI and Web3 initiatives with the potential for global impact.
This is a collaboration that combines the knowledge of AI, cutting-edge research, and technology with ethical investment strategies to create the standard for internet safety in the age of AI — an AI solution that will serve humanity.
Subscribe for updates at umanitek.ai to stay in touch and be among the first to learn about cofounders, contributors, and partners of umanitek, as well as reserve a spot to test-drive umanitek’s products at their release.
UMANITEK: Setting the standard for internet safety was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
March 2025
DIF Website | DIF Mailing Lists | Meeting Recording Archive
Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Community Events; 7. DIF Member Spotlights; 8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

Creator Assertions Working Group Joins DIF

DIF welcomes the Creator Assertions Working Group (CAWG) as its newest working group! CAWG builds upon C2PA's work by defining additional assertions that allow content creators to express intent about their content and bind their online identity to what they produce. This collaboration with ToIP will be chaired by Eric Scouten from Adobe, with meetings beginning March 10.
To participate, join DIF!
DIDComm User Group Expands!
The DIDComm User Group offers developers an open forum to learn about and implement the DIDComm protocol. Colton Wolkins highlights three compelling reasons to join:
- Seeing real-world demonstrations from implementers
- Getting help with technical challenges
- Learning about DIDComm adoption across industries and regions

The group meets regularly to accommodate global participants.
Cryptographic Pseudonyms: A Short History
Following the IETF/IRTF's adoption of the BBS Blind Signatures and BBS per-Verifier Linkability specifications, this comprehensive piece by Greg Bernstein, Dave Longley, Manu Sporny, and Kim Hamilton Duffy explores the evolution of cryptographic pseudonyms and their privacy features. The article examines how BBS signatures provide protections against credential fraud.
DIF Hackathon 2024 Winners Wrap Up
The DIF Hackathon 2024 showcased remarkable innovation across multiple tracks, including education and workforce solutions, reusable identity, and privacy-preserving authentication. Standout projects included ZKP implementations, verifiable credentials powering next-gen job boards, seamless hotel check-ins, and digital identity solutions for expats. See the full list of winners and their groundbreaking projects.
🛠️ Working Group Updates

DID Methods Working Group
The group discussed the categorization of DID methods, blockchain-based DID methods, the need for standardization in web-based methods and decentralized methods for government use cases, and the proposal for a new charter for the W3C Working Group. The group agreed on the need for further refinement and discussion before a vote could be held.
DID Methods meets bi-weekly at 9am PT/ noon ET/ 6pm CET Wednesdays
Identifiers and Discovery Working Group
The Identifiers & Discovery group discussed various work items, including Linked VPs and the did:webvh method, as well as open source code projects like the Universal Resolver. The group also explored the potential of biometric technology to create a private key directly from a person's face without storing any biometric data or involving a server, and considered the possibility of generating DIDs from biometric data.
Identifiers and Discovery meets bi-weekly at 11am PT/ 2pm ET/ 8pm CET Mondays
🪪 Claims & Credentials Working Group
DIF recently hosted a special Credential Schemas workshop focused on privacy-preserving age verification solutions. Led by Otto Mora (Privado ID) and Valerio Camiani (Crossmint), participants explored innovative approaches beyond traditional verification methods, including AI-based age estimation while maintaining strong privacy protections.
Credential Schemas work item meets bi-weekly at 10am PT/ 1pm ET/ 7pm CET Tuesdays
Applied Crypto Working Group
The general Applied Crypto Working Group has finished a draft of a general trust model for ZKP self-attestations and is working through feedback. It's a great time to get involved.
The Crypto BBS+ Work Item group is addressing feedback from the CFRG Crypto Panel review.
BBS+ work item meets weekly at 11am PT/ 2pm ET/ 8pm CET Mondays
Applied Crypto Working Group meets bi-weekly at 7am PT/ 10am ET/ 4pm CET Thursdays
The DIF Labs Show and Tell event was held on February 18, featuring three groundbreaking projects from the DIF Labs Beta Cohort. After three months of development and refinement, these projects demonstrated cutting-edge innovation in decentralized identity with real-world applications.
The featured projects included:
- Ordinals Plus: Brian Richter presented a framework for implementing verifiable credentials on Bitcoin using Ordinal inscriptions.
- Linked Claims: Golda Velez, Agnes Koinange, and Phil Long demonstrated their system that combines attestations to build progressive trust.
- VerAnon: Alex Hache showcased a protocol for anonymous personhood verification using Semaphore and zero-knowledge proofs.

The event provided attendees the opportunity to engage directly with project creators, offer feedback, and network with industry leaders driving the future of identity. As the Beta program concludes, DIF Labs is now preparing for its next cohort and invites builders, startups, and innovators passionate about decentralized identity to get involved.
DIF Labs meets on the 3rd Tuesday of the month at 8am PT/ 11am ET/ 5pm CET
DIDComm Working Group
The DIDComm WG is discussing the Trust Spanning Protocol (TSP) and its potential integration with DIDComm to leverage the best of both protocols.
DIDComm Working Group meets the first Monday of each month noon PT/ 3pm ET/ 9pm CET
If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please join DIF.
🌎 DIF Special Interest Group Updates

Hospitality & Travel SIG
The H&T group featured presentations from The Camino Network Foundation, Customer Futures, and Indicio. The group is discussing a Glossary project, as well as the formation of a DIF working group for specification development.
Meetings take place weekly on Thursdays at 10am EST. Click here for more details
DIF China SIG
Click here for more details
APAC/ASEAN Discussion Group
The group discussed progress on their healthcare project, the development of a platform for verifying yellow fever vaccination cards, and the introduction of a new system for verifying the legitimacy and identity of businesses and individuals. They also discussed the concept of a foundational identity and the importance of government involvement.
This group is seeking more participants in their calls, so please join!
The DIF APAC call takes place Monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.
DIF Africa SIG
The DIF Africa SIG discussed DID:UNCONF Africa and plans to aggregate and publish the Book of Proceedings from the DID:UNCONF sessions on the event website.
Meetings take place Monthly on the 3rd Wednesday at 1pm SAST. Click here for more details
DIF Japan SIG
Meetings take place on the last Friday of each month 8am JST. Click here for more details
📖 DIF User Group Updates

DIDComm User Group
There are two meeting series to accommodate different time zones, each taking place every Monday except the first week of the month (which is reserved for the DIDComm Working Group). Click here for more details.
Veramo User Group
Meetings take place weekly on Thursdays, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details
📢 Announcements at DIF
Conference season is kicking into high gear. Explore our Events calendar to meet the DIF community at leading Decentralized Identity, Identity, and Decentralized Web events.
🗓️ DIF Members

Dr. Carsten Stöcker Appointed DIF Ambassador
DIF has appointed Dr. Carsten Stöcker, founder and CEO of Spherity GmbH, as DIF Ambassador. Dr. Stöcker brings extensive experience implementing decentralized identity across industrial ecosystems, with expertise in Verifiable Digital Product Passports for regulated industries and bridging European Digital Identity initiatives with Industry 4.0 applications.
👉 Are you a DIF member with news to share? Email us at communication@identity.foundation with details.
🆔 Join DIF!
If you would like to get in touch with us or become a member of the DIF community:
Join DIF: https://identity.foundation/join/
Visit our website to learn more
Follow our channels:
Follow us on Twitter/X
Join us on GitHub
Subscribe on YouTube
🔍 Read the DIF blog
New Member Orientations
If you are new to DIF, join us for one of our upcoming new member orientations. Find more information on DIF's Slack or contact us at community@identity.foundation.
Our Executive Director, Lisa LeVasseur, gave a presentation at this year’s Global Shield Online Conference on “Measuring Unavoidable Risks in Technology”. The entirety of the presentation can be viewed in the video below:
The post Global Shield Online Conference: Measuring Unavoidable Risks in Technology [VIDEO] appeared first on Internet Safety Labs.
DIF is excited to welcome the Creator Assertions Working Group (CAWG) as a new working group of DIF.
About CAWG
CAWG builds upon the work of the Coalition for Content Provenance and Authenticity (C2PA) by defining additional assertions that allow content creators to express individual and organizational intent about their content. CAWG defines an identity assertion that allows content creators to bind their online identity to the content that they produce.
A Collaboration with ToIP and CAWG
This new working group will be a joint collaboration with members of DIF and the Trust Over IP Foundation (ToIP), and the existing members of CAWG are welcome to participate. And of course, new members are welcome to join.
This group will be chaired by Eric Scouten, Identity Standards Architect at Adobe, with additional co-chairs to be named soon.
Get Involved
CAWG meetings will be held every other week starting Monday, 10 March at the following times:
Americas / European Time Zones: 8am PST / 1500 UTC
APAC Time Zones: 6pm PST / 0100 Tuesday UTC

To get involved, see https://cawg.io/
The International Energy Agency recently released two reports: its annual Electricity 2025 analysis and a discussion of current challenges in grid investment. These documents describe two key aspects of the global energy transition: global demand for electricity is increasing, and new renewables (including distributed energy resources) are expected to be the greatest contributor to new supply that meets it. If you dig a bit deeper, you can also see why the electricity system needs a Digital Spine, Energy Web’s solution to optimize grid flexibility and unlock the potential of DERs.
The energy transition continues
The IEA forecasts “soaring” electricity demand over the next three years. Most of this increase will occur in developing economies, but advanced economies are expected to see increasing demand as well, driven by electric vehicle adoption, new air conditioners, heat pumps, and (especially in the U.S.) new data centers.
This new demand will be met by new low-carbon generation. The IEA expects fully half of the new demand to be met by solar PV, the cost of which continues to decline.
But progress increases complexity
It is undoubtedly excellent news that low-carbon generation will meet new demand, but the IEA cautions that power systems need to further adapt to the changing fuel mix on our grids. The combination of relatively inflexible traditional thermal plants, variable new renewable generation, and new demand patterns (e.g., more air conditioning demand on hot days and more electric vehicle charging on workday evenings) results in higher volatility in power markets.
We already see these impacts. The Electricity 2025 report describes the increasing incidence of negative prices in wholesale electricity markets, which occur when supply outstrips demand so significantly that market-clearing prices fall below zero. This can happen for any number of reasons, but as the energy transition continues we would expect to see it happen more frequently in areas where afternoon solar generation can far exceed air conditioning demand (think massive amounts of rooftop and utility-scale solar) or overnight wind generation is greater than nearby transmission capacity.
Negative pricing events are most common in the regions of the world that are furthest along the energy transition, including Australia, California, and Texas (three places with very high solar PV or wind penetration).
Percentage of negative hourly wholesale electricity prices in selected regions, 2019–2024 (IEA)

Negative price events are not in themselves a problem; however, they are noteworthy because they suggest real challenges in power system operations, which will only increase as the energy transition continues.
Physical infrastructure improvements will be insufficient
One strategy to address these challenges is to increase investments in electric grids. This means increasing transmission capacity and upgrading other physical infrastructure in order to reduce constraints that contribute to negative pricing: if one location has more power supply than demand, simply ship the excess to a location that can use it (yes, we recognize that this simplified description glosses over the complexities of constrained dispatch, but it remains true that increasing capacity in an appropriately-planned way will reduce the impacts of system constraints).
Unfortunately, the IEA does not think such new investments will be sufficient, or at least not sufficiently fast:
Around 1.5 million kilometres of new transmission lines have been built worldwide over the last decade, but inadequate transmission remains a major constraint on power system development, electrification and energy security. Among other issues, grid infrastructure has struggled to keep pace with the rate at which new renewable sources are entering the system.
Their report describes several reasons for this, including supply chain bottlenecks and permitting delays — problems that are unlikely to recede in an era of increasing trade protectionism.
Managing power systems requires new ways to provide flexibility
So if new transmission is insufficient, what is the solution? The IEA offers its perspective: “Optimising the use of existing grid infrastructure through digital technologies enhances efficiency and maximises the use of existing assets, providing a safety valve for networks and supply chains.”
This is precisely what our Digital Spine solution does: facilitate integration of DERs to allow grid operators and market participants to efficiently, fairly, and securely operate the grid. It can wring cost savings out of existing operational processes or even allow new market configurations that until now have not been feasible.
As an example of the latter, we implemented our Digital Spine solution in Australia as part of the Australian Energy Market Operator (AEMO)’s Project Edge. In that project, AEMO and market participants demonstrated a new approach to integrating DERs into both transmission- and distribution-level operations: a two-sided market based on dynamic operating envelopes. An independent cost-benefit analysis found that implementing this approach could save consumers AU$5–6 billion over twenty years, and that the savings would accrue to both consumers with DERs and those without them. A separate independent technology and cybersecurity assessment found that the Digital Spine as the underlying data exchange was more scalable, stable, resilient, flexible, and secure than traditional point-to-point or centralized approaches.
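Project Edge's full market design is well beyond a newsletter sketch, but the core idea of a dynamic operating envelope is easy to illustrate: the operator publishes time-varying import/export limits for a connection point, and a DER's offer must stay within them. The sketch below uses illustrative field names, not AEMO's actual schema:

```typescript
// Sketch of a dynamic operating envelope check (illustrative, not AEMO's schema).
interface OperatingEnvelope {
  intervalStart: string; // ISO timestamp for the dispatch interval
  maxExportKw: number;   // limit on power fed into the grid (e.g. rooftop solar)
  maxImportKw: number;   // limit on power drawn from the grid (e.g. EV charging)
}

// Positive offer = export to grid, negative = import from grid.
function clampOffer(offerKw: number, env: OperatingEnvelope): number {
  return Math.min(env.maxExportKw, Math.max(-env.maxImportKw, offerKw));
}

// Example: an 8 kW export offer against a 5 kW export limit is clamped to 5 kW.
// clampOffer(8, { intervalStart: "2025-01-01T12:00:00Z",
//                 maxExportKw: 5, maxImportKw: 10 }) === 5
```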
If you or your organization are looking to increase your network’s efficiency and unlock the full potential of DERs, please see our Digital Spine webpage for more information or to contact us.
Why the Grid Needs a Digital Spine: Insights from the International Energy Agency was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
In 2024, Internet Safety Labs (ISL) added third parties observed in app network traffic to our app Safety Labels, viewable in AppMicroscope.org. Recently, we reviewed the network traffic of the original 1541 apps, looking for data brokers, and the results were clear: 16% of apps that were recommended or required in schools were sending student data to registered data brokers. Every state (and the District of Columbia) had at least three schools with apps communicating with data brokers. In total, 442 of the 663 studied schools (67%) had apps with data broker traffic.
Importantly, counting only “registered” data brokers excludes apps sending data to platforms whose data is destined for data brokers, as well as entities that should be registered as data brokers but aren't.
This report details our findings and analysis on Edtech apps observed communicating with data brokers, as well as our recommendations for educators and app developers.
1.1 The Inadequacy of “Data Broker” Legal Definitions
This analysis counts only registered data brokers found in either the California Data Broker Registry or the Vermont Data Broker Registry.1 Readers should be aware that the legal definition of “data broker” in the US fails to properly account for and hold responsible the full data supply chain feeding data brokers. Specifically, it fails to include:
- First parties who sell personal information, such as mobile carriers, who were found as recently as last year to be selling some of the most sensitive personal information: location data.2
- Entities who sell or share personal information in bulk for marketing and advertising purposes, including identity resolution platforms (IDRPs) and customer data platforms (CDPs), both designed to ingest and synthesize personal data from a multitude of services/platforms.

Thus, we must assume that the volume of student data making its way into data brokers is substantially larger than this analysis conveys.
1.2 Methodology
In 2022, ISL conducted a privacy audit of recommended and required technologies for students in a representative sample of K-12 schools across the US. In total, ISL examined 1541 mobile apps, including analysis of the network traffic between the app, the first party, and all third-party servers.
Next, ISL researchers determined the corporate owner of every subdomain that appeared in the network traffic collected for the apps.
Finally, we determined if the corporate owner was a registered data broker by matching against companies in the California and Vermont data broker registries; note that this reflects data broker registries as of 2024.
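In code, that final matching step is essentially a set lookup. A minimal sketch with hypothetical data (ISL's actual subdomain-to-owner database and the registry lists are far larger):

```typescript
// Sketch of the registry-matching step: map each observed subdomain to its
// corporate owner, then flag owners that appear in a data broker registry.
const registry = new Set(["PubMatic", "LiveRamp", "Magnite"]); // registry excerpt

// Hypothetical subdomain-to-owner mapping built in the previous step.
const subdomainOwners: Record<string, string> = {
  "ads.pubmatic.com": "PubMatic",
  "idsync.rlcdn.com": "LiveRamp",
  "cdn.example-school.org": "Example School District",
};

// Returns the set of registered data brokers seen in one app's traffic.
function brokersInTraffic(subdomains: string[]): Set<string> {
  const hits = new Set<string>();
  for (const sd of subdomains) {
    const owner = subdomainOwners[sd];
    if (owner && registry.has(owner)) hits.add(owner);
  }
  return hits;
}
```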
2. Findings
243 apps, or 15.8% of tested apps, sent data to registered data brokers. The 243 apps sending data to data brokers communicated with a shocking 6.7 data brokers on average. This means that when children use these apps, their information will be sent to several data brokers.
The top app categories sending data to data brokers were:
- News apps (77% of audited news apps)
- Reference apps (37%)
- Sports apps (32%)
- Community Engagement Platform (CEP) apps (26%)

News, reference, and sports apps are not surprising; news apps are known to be rife with adtech and martech. See Figure 7 for all category counts.
As noted in all three previously published findings reports, Community Engagement Platform apps were among the leakiest apps observed. The CEP developers with apps found to be communicating with data brokers are shown in Table 1.
Table 1: CEP app developers with apps communicating with data brokers
CEP App Developer | Total # of apps | # apps with data broker traffic | % with data brokers
Apptegy | 129 | 16 | 12%
Filament Essential Services | 5 | 1 | 20%
Finalsite | 122 | 42 | 34%
Focus School Software | 6 | 1 | 17%
From NowOn | 4 | 1 | 25%
Heather Hanks | 2 | 2 | 100%
Intrado Corporation | 42 | 8 | 19%
Mascot Media | 10 | 2 | 20%
SchoolInfoApp | 14 | 2 | 14%
SchoolPointe | 8 | 2 | 25%
Straxis | 2 | 1 | 50%
Of the larger CEP developers (Apptegy, Finalsite, and Intrado), it's clear that the apps' configurability is influencing the presence of data brokers, since not all of the apps were found to be communicating with data brokers. Finalsite, for example, provides an administrative dashboard that allows school districts to edit the URLs opened by the app. In 2021, ISL spoke with Blackboard (Anthology), then-owner of the Finalsite apps, and learned that the platform performed no checking on the domains entered by school administrators. We suggested that they add guardrails, checking for things like dangling or malicious domains, at a bare minimum. Finalsite acquired the apps from Blackboard (Anthology) in September 2022.
The Palm Beach County School District Android app (a CEP app) by Intrado included the most data brokers, at a whopping 31. (See also the Safety Label for the app here: https://appmicroscope.org/app/1579/) The app is no longer available on the Google Play store.
Table 2 shows the five apps with the most data brokers from 2022 and from a recent retesting. Two of the apps have been removed from the store, but the other three are the same or worse with respect to the number of data brokers.
Table 2: Top Five Apps – Most Data Brokers
App Name | Developer | # of Data Brokers (2022) | # of Data Brokers (2025)
Palm Beach County School District (Android) | Intrado Corp. | 31 | App removed from store
SBLive Sports (Android) | SB Live Sports | 27 | 28
AllSides – Balanced News (iOS) | AllSides | 27 | 28
Montgomery Public Schools (Android) | Finalsite | 27 | App removed from store
Westover Christian Academy (Android) | Apptegy | 25 | 34

As discussed in Findings Report 1, the majority of apps from the benchmark weren't strictly edtech apps; the benchmark included a surprising number of non-edtech, general-use apps. Isolating edtech categories, we find that only 6 apps (2.0%) of the strictly edtech apps had observed traffic to data brokers. While this is substantially better than the overall sample rate, for these kinds of services there should be no data brokers receiving data from the apps.
The following are the edtech apps communicating with data brokers:

Table 3: Edtech apps with data broker traffic
Classroom Messaging Software: FAMILIES | TalkingPoints (iOS)
School Management Software: Choicelunch (Android), Choicelunch (iOS), WebMenus by ISITE Software (Android)
Student Information System: k12 (Android)
Virtual Classroom Software: ZOOM Cloud Meetings (Android)

2.3 Most Common Data Brokers
The three most frequently observed data brokers in the network traffic were PubMatic, LiveRamp, and Magnite (Table 4).
Table 4: Top 25 Data Brokers Found in Network Traffic
Data Broker | # Apps in K12 Benchmark
PubMatic | 110
LiveRamp | 100
Magnite | 98
Lotame | 78
OpenX | 78
Freewheel | 76
Taboola | 72
Oracle | 71
Nielsen Marketing | 69
Tapad | 65
LiveIntent | 59
ID5 | 58
Neustar | 57
PulsePoint | 50
Outbrain | 45
StackAdapt | 45
Merkle Marketing | 42
Media.net | 41
Intent IQ | 38
33Across | 29
Wunderkind | 29
BounceX | 24
GumGum | 24
Zeta Global | 22
Bombora | 21

Data brokers were found in apps in every state and the District of Columbia. That is, every state sample of 13 schools had at least one school with at least one app with data broker traffic. Figure 1 shows how many schools from the 2022 benchmark had apps that were sending traffic to data brokers. 13 schools were sampled in each state, so the heatmap reflects up to 100% (i.e. all 13 schools) having apps with traffic to data brokers. Texas, Wisconsin and Louisiana each had apps with data brokers in all 13 studied schools.
Figure 1: Number of schools in state sample with at least one app with data broker traffic (13 schools max per state)
Figure 2 shows the total number of apps with data broker traffic for each state sample of 13 schools. The states with the most apps with data broker traffic were Maryland, Kansas, and Minnesota.
Figure 2: Total number of apps with data broker traffic
We hypothesize that the likelihood of apps with data broker traffic is mainly related to the sampled schools' propensity to recommend a higher number of technologies to students. The correlation between the number of apps with data brokers and the total number of apps was moderately strong at .69. Figure 3, the heatmap showing the total number of apps recommended by the 13 sampled schools in each state, indeed shows similarities (Texas, Minnesota, Wisconsin and Maryland, in particular).
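For readers who want to reproduce that statistic, the .69 figure is a standard Pearson correlation over the per-state counts; a minimal sketch (the actual per-state vectors come from the report's figures):

```typescript
// Pearson correlation coefficient, as used for the .69 figure above.
function pearson(x: number[], y: number[]): number {
  const n = x.length;
  const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / n;
  const mx = mean(x), my = mean(y);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < n; i++) {
    const dx = x[i] - mx, dy = y[i] - my;
    num += dx * dy;   // covariance numerator
    dx2 += dx * dx;   // variance terms for normalization
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}

// With the report's per-state vectors (apps with data broker traffic vs.
// total apps recommended, per state), this yields the stated ~0.69.
```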
Figure 3: Total number of apps per state
We were interested to see if there was any obvious correlation between state privacy laws and the number of data brokers observed. Figure 4 shows states with student data privacy laws. Indeed, three of the nine states that don’t have student data laws, Minnesota, Wisconsin, and Maryland, each had high numbers of apps with data broker traffic, and all 13 schools in Wisconsin had apps with data broker traffic. While inconclusive with respect to causation, the correlation warrants future study. It’s also possible that the absence of a state student data privacy law encourages a higher number of technologies being recommended to students in schools.
Figure 4: States with student data privacy laws
https://studentprivacycompass.org/state-laws/
There was no obvious correlation between states with children’s privacy laws and the number of apps with data broker traffic (Figure 5).
Figure 5: States with children’s privacy laws
https://www.huschblackwell.com/2024-state-childrens-privacy-law-tracker
Figure 6: Updated app safety label https://appmicroscope.org/app/1579/
Figure 7: Apps with data broker traffic by app category
Table 5: Apps with Data Broker Traffic by State
State | # of schools with at least one app with data broker traffic (13 sampled per state) | % of schools | Total # apps with data broker traffic | Total # unique apps | Children's / student data privacy law flags (as listed)
Alabama | 10 | 77% | 22 | 15 |
Alaska | 3 | 23% | 5 | 5 |
Arizona | 6 | 46% | 8 | 7 | Y
Arkansas | 11 | 85% | 21 | 13 | Y
California | 9 | 69% | 14 | 12 | Y Y
Colorado | 5 | 38% | 7 | 5 | Y Y
Connecticut | 12 | 92% | 30 | 17 | Y Y
Delaware | 11 | 85% | 28 | 13 | Y
Washington, D.C. | 9 | 69% | 18 | 6 | Y
Florida | 9 | 69% | 36 | 23 | Y Y
Georgia | 12 | 92% | 30 | 16 | Y
Hawaii | 4 | 31% | 6 | 5 | Y
Idaho | 7 | 54% | 10 | 6 | Y
Illinois | 9 | 69% | 24 | 14 |
Indiana | 9 | 69% | 25 | 13 | P Y
Iowa | 5 | 38% | 18 | 18 | Y
Kansas | 12 | 92% | 39 | 24 | Y
Kentucky | 8 | 62% | 21 | 16 | Y
Louisiana | 13 | 100% | 27 | 9 | Y
Maine | 11 | 85% | 26 | 14 | Y
Maryland | 11 | 85% | 47 | 28 | Y
Massachusetts | 11 | 85% | 27 | 10 | Y
Michigan | 9 | 69% | 21 | 11 | Y
Minnesota | 9 | 69% | 39 | 23 |
Mississippi | 10 | 77% | 24 | 17 | Y
Missouri | 8 | 62% | 34 | 23 | Y
Montana | 10 | 77% | 17 | 12 | Y
Nebraska | 9 | 69% | 20 | 17 | Y
Nevada | 4 | 31% | 7 | 6 | Y
New Hampshire | 11 | 85% | 18 | 5 | Y
New Jersey | 9 | 69% | 17 | 11 |
New Mexico | 3 | 23% | 3 | 3 | Y
New York | 6 | 46% | 12 | 6 | Y Y
North Carolina | 7 | 54% | 9 | 4 | Y
North Dakota | 8 | 62% | 24 | 19 |
Ohio | 7 | 54% | 14 | 10 | Y
Oklahoma | 10 | 77% | 26 | 12 | Y
Oregon | 5 | 38% | 6 | 3 |
Pennsylvania | 8 | 62% | 11 | 8 | Y
Rhode Island | 12 | 92% | 29 | 10 | Y
South Carolina | 10 | 77% | 15 | 4 | Y
South Dakota | 8 | 62% | 23 | 17 | Y
Tennessee | 11 | 85% | 28 | 16 | P Y
Texas | 13 | 100% | 34 | 15 | Y
Utah | 7 | 54% | 9 | 5 | Y Y
Vermont | 11 | 85% | 15 | 6 | Y
Virginia | 8 | 62% | 18 | 13 | Y Y
Washington | 3 | 23% | 4 | 4 | Y
West Virginia | 9 | 69% | 18 | 14 | Y
Wisconsin | 13 | 100% | 36 | 15 |
Wyoming | 7 | 54% | 9 | 5 | Y
Footnotes:
1. ISL is updating the database with both Texas and Oregon data broker registries.
2. https://www.fcc.gov/document/fcc-fines-largest-wireless-carriers-sharing-location-data; https://www.wired.com/story/gravy-location-data-app-leak-rtb/.
The post Data Broker Presence in 2022 US K-12 Benchmark Apps appeared first on Internet Safety Labs.
The post Jim Owens Re-Elected as Chairman of the Board appeared first on Velocity.
Kia ora,
2025 is well underway with a healthy mix of activity, excitement and opportunity, Digital Trust included.
We are seeing a major shift in this space as acceptance networks drive up digital identity adoption at the expense of ‘great in theory, hard to implement’ ideas like SSI (Self-Sovereign Identity), or the (hotly debated) need for digital identity itself.
Member News
The final list of participants in Australia's Age Assurance trial was recently announced, and it's great to see Digital Identity NZ members General Identity Protocol, GBG and MyMahi amongst them!
It's terrific to welcome CentrifAI Ltd and welcome back Westpac, making it a 'big four' clean sweep, plus the Co-operative Bank (the only NZ-owned bank member). See all member organisations here.
DINZ has one Corporate seat available on the Executive Council, plus one further seat to help advance our diversity, equity and inclusion policy. If your organisation is not a member, join and lead.
Free access to InformDI’s 7 Chapters of DISTF, sponsored by NEC and DINZ, ends on 31 March. Register now and join the 100+ that have already taken the course. Want to sponsor? Get in touch.
What’s Happening Around the Globe
Australia: Banks are driving easy digital identity adoption across the ditch.
UK: Government has announced plans for its own digital wallet, mainly to hold mDLs (Mobile Driver's Licences) and similar government-held official data, according to industry comment.
NZ: Global media pick up NZ news too (hat tip to Biometric Update and Inidsol UK).
DINZ Updates
A Personal Note
As members already know (and others will see on social media), after three and a half years as DINZ's longest-serving Executive Director, I'm stepping down in the next couple of months to move towards my original intention when returning from Europe: more downtime, alongside some digital identity advisory work that uses my 20+ years of knowledge and experience in this space. As my time as Executive Director comes to an end, the DINZ Executive Council welcomes your suggestions on the following; please get in touch.
1. What Digital Trust related project ideas do you have in mind to lead this year, so they can be considered in DINZ's plans? You can sponsor that work in DINZ or sponsor projects remaining from 2024.
2. Are you (or do you know of anyone who would be) interested in the role of Executive Director of Digital Identity NZ? Get in touch.
Upcoming Events
DIA Training Schedule Confirmed
Identification management plays a core role in our work, and members should have a foundational understanding of what it is and how it impacts their customers. Good identification management helps reduce or prevent fraud, loss of privacy and identity theft by applying good practices and processes.
Topics covered in DIA Training Courses:
FREE online learning at your own pace. Learn more.
For any enquiries relating to Identification Management training email identity@dia.govt.nz
Ngā mihi,
Colin Wallis
Executive Director, Digital Identity NZ
Member news, global trends, upcoming events and more.
Read full news here: More momentum, greater choice – Digital Trust is getting real
The post More momentum, greater choice – Digital Trust is getting real appeared first on Digital Identity New Zealand.
A Snapshot of Passkey Deployments for Employee Sign-ins in the U.S. and UK
Key findings:
- Enterprises understand the value of passkeys and the majority are rolling out passkeys for workforce sign-ins: most have either deployed or are in the midst of deploying passkeys, with goals tied to improved user experience, enhanced security, and standards/regulatory compliance. Those that are deploying are rolling out a mix of device-bound and synced passkeys.
- Enterprises are prioritizing passkey rollouts to users with access to sensitive data and applications, and are leveraging communication, training and documentation to increase adoption.
- Enterprises are reporting significant security and business benefits after rolling out passkeys: they report positive impacts on user experience, security, cost reduction, productivity and digital transformation goals, and are seeing declines in usage of legacy authentication methods. Interestingly, these benefits directly correlate with what businesses who aren't yet using passkeys dislike most about their current authentication methods: that they can be compromised, are costly, and are difficult to use.
- Organizations that do not have active passkey projects cite complexity, costs and an overall lack of clarity about implementation as reasons, signaling a need for increased education to enterprises on rollout strategies to reduce concerns.

Read the Full Report | Read the Press Release
Registrants can watch the webinar on demand.
Watch On Demand
Respondents report positive impacts on user experience, security, productivity, and cost reduction from deploying a mix of device-bound and synced passkeys.
February 26, 2025 — The FIDO Alliance along with underwriters Axiad, HID, and Thales today released its State of Passkey Deployment in the Enterprise report, finding that 87% of surveyed companies have, or are in the midst of, rolling out passkeys with goals tied to improved user experience, enhanced security, and compliance.
The report is the result of an independent survey commissioned in September 2024 by the FIDO Alliance Enterprise Deployment Working Group, with underwriting support from Axiad, HID, and Thales, to understand the state of passkey deployments in the U.S. and UK; the methods used to deploy passkeys and enroll employees; and the perceived barriers to deployment. Read the report at https://fidoalliance.org/research-state-of-passkey-deployment-in-the-enterprise-a-snapshot-of-deployments-employee-sign-ins-us-uk.
The survey revealed four key findings:
1. Enterprises understand the value of passkeys for workforce sign-ins. A majority of decision makers (87%) report deploying passkeys at their companies. Of these, 47% report rolling out a mix of device-bound passkeys (on physical security keys and/or cards) and synced passkeys (synced securely across the user's devices).
2. Organizations are prioritizing passkey rollouts to users with access to sensitive data and applications, including the three most commonly cited priority groups: those requiring access to IP (39%), users with admin accounts (39%), and users at the executive level (34%). Within these deployments, organizations are leveraging communication, training, and documentation to increase adoption.
3. Passkey deployments are linked to significant security and business benefits. Respondents report moderate to strong positive impacts on user experience (82%), security (90%), help-center call reduction (77%), productivity (73%), and digital transformation goals (83%).
4. Groups that do not have active passkey projects cite complexity (43%), costs (33%), and lack of clarity (29%) about implementation as reasons. This signals a need for increased education for enterprises on rollout strategies to reduce concerns, as there is a correlation between these perceived challenges and the proven benefits of passkeys.

“This study is equally encouraging and illuminating as it points to strong willingness and commitment to deploy passkeys to employees – and also is informative in helping FIDO shape resources that we can deliver to help enterprises around the world more quickly and effectively implement their FIDO authentication strategies,” said Andrew Shikiar, CEO and executive director of the FIDO Alliance. “Passkeys can stop AI-generated social engineering attacks in their tracks while also increasing employee productivity and reducing costs associated with help desk support and security breaches. FIDO Alliance is committed to helping more companies around the world realize these benefits by providing actionable passkey implementation guidance and best practices, which this data will help define.”
New phishing and fraud techniques emerge every day, driven in particular by widespread generative AI use. As reflected in the report, enterprise leaders are becoming aware of the limitations of compromisable passwords and are seeing the value of deploying the most secure and user-friendly authentication methods possible. These insights will be leveraged to further remove the perceived and/or real barriers around passkey adoption so more enterprises can experience their benefits on a global scale.
Learn More During FIDO’s March 6 Webcast
The FIDO Alliance will host a webcast on March 6, 2025 at 8am PST to provide further insights into the report methodology, the findings and next steps. The webcast will feature Michael Thelander, senior director of product marketing at Axiad; Katie Björk, director of communications and solution marketing at HID; and Sarah Lefavrais, Authentication devices product marketing director at Thales, along with Megan Shamas, chief marketing officer of the FIDO Alliance. Register here.
Michael Thelander, Axiad's director of product marketing, thinks the survey results will deliver not just interesting data, but will also provide a path for FIDO2 to become a first-class citizen alongside other forms of PKI-based authentication in the enterprise. “Passkey technology has not only matured, but this survey reveals how identity practitioners and strategists are beginning to integrate passkeys with their other workforce authentication methods, across different platforms and device types, to deliver what identity architects and users both want: strong authentication that doesn't place a ‘friction tax’ on the last step of accessing systems and networks.”
“HID, in collaboration with fellow FIDO Alliance members, launched this survey to gain insights into the priorities of enterprise and security leaders that drive successful passkey implementation. We also aimed to identify the challenges other organizations encounter when integrating FIDO technology into their authentication strategies. HID’s overarching goal is to empower organizations to meet their business objectives by eliminating one of their most significant obstacles: user experience and security challenges linked to passwords,” says Katie Björk, Director of Communications and Solution Marketing.
“Thales is excited to collaborate with the FIDO Alliance for this research, which underscores the growing adoption of passkeys for employee sign-ins,” said Haider Iqbal, Director Product Marketing IAM at Thales. “We’re seeing similar interest from our customers, who recognize the benefits of FIDO authentication for both security and productivity. Thales is committed to enabling organizations to migrate their workforce and customers to passkeys, helping them stay ahead of the curve with secure, seamless and frictionless digital journeys for all users.”
Survey Methodology:
The survey was conducted among 400 decision makers who would be / are involved in passkey deployment in companies with 500+ employees across the UK and the US. The interviews were conducted online by Sapio Research in September 2024 using an email invitation and an online survey. At an overall level, results are accurate to ± 4.9% at 95% confidence limits, assuming a result of 50%. The survey was produced by the FIDO Alliance Enterprise Deployment Working Group, with underwriting support from Axiad, HID, and Thales.
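As a quick check, the stated ± 4.9% accuracy follows from the standard margin-of-error formula; the z-score of 1.96 for 95% confidence is our assumption about how Sapio computed it:

```typescript
// Margin of error for a sample proportion: z * sqrt(p(1-p)/n).
const n = 400;  // respondents
const p = 0.5;  // assumed result of 50% (worst case / widest interval)
const z = 1.96; // 95% confidence

const marginOfError = z * Math.sqrt((p * (1 - p)) / n);
console.log((marginOfError * 100).toFixed(1) + "%"); // prints "4.9%"
```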
About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies, and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services.
About Axiad
Axiad is an identity security company whose products make authentication and identity risk management simple, effective and real. Our credential management systems make MFA defensible, manageable and usable. Our cutting-edge risk solutions help customers identify and quantify risk and fortify their systems against a barrage of new attacks. Learn more at www.axiad.com.
About HID
HID powers the trusted identities of the world’s people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people convenient access to physical and digital places and connect things that can be identified, verified and tracked digitally. Millions of people around the world use HID’s products and services to navigate their everyday lives, and billions of things are connected through HID’s technology. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID has over 4,500 employees worldwide and operates international offices that support more than 100 countries. HID is an ASSA ABLOY Group brand. For more information, visit www.hidglobal.com.
About Thales Cybersecurity Products
In today’s digital landscape, organizations rely on Thales to protect what matters most – applications, data, identities, and software. Trusted globally, Thales safeguards organizations against cyber threats and secures sensitive information and all paths to it — in the cloud, data centers, and across networks. Thales offers platforms that reduce the risks and complexities of protecting applications, data, identities and software, all aimed at empowering organizations to operate securely in the digital landscape. By leveraging Thales’s solutions, businesses can transition to the cloud with confidence, meet compliance requirements, optimize software usage, and deliver exceptional digital experiences to their users worldwide.
More on Thales Cybersecurity Products: https://cpl.thalesgroup.com/
More on Thales Group: www.thalesgroup.com
Contact
press@fidoalliance.org
Guest Blog By Colton Wolkins
The DIDComm User Group Meeting is open to anyone interested in using and implementing DIDComm. This is a great place for developers to learn, ask questions, and share experiences!
DIDComm (Decentralized Identity Communication) is rapidly becoming a fundamental protocol for secure, private, and interoperable messaging in decentralized identity systems. As adoption grows, developers and organizations worldwide are implementing DIDComm to enable seamless, trustable communication between digital identities. But navigating the technical landscape can be challenging—this is where the DIDComm User Group comes in.
Different from the DIDComm Working group, composed of DIF members actively contributing to the DIDComm Specification, the DIDComm User Group is an open forum for anyone interested in implementing and using the DIDComm protocol to connect with others, share experiences, and get answers to your questions.
1. See Demonstrations and Presentations from Other DIDComm Implementers
Learn from real-world implementations! The User Group meetings often feature demonstrations and presentations from developers who are actively working with DIDComm. These sessions provide insights into how others are tackling challenges, designing solutions, and deploying DIDComm in various environments.
Recently, a demonstration involving a Raspberry Pi showcased how DIDComm can be used to send messages back and forth with another device privately, securely, and even when both devices are on different networks. Following the demonstration, a short Q&A session was held, leading into a discussion about how DIDComm can mitigate many of the issues that the industry has with IoT devices.
2. Receive Help with Your DIDComm Implementation
Are you facing technical issues while integrating DIDComm? Do you have questions about the DIDComm specification? The User Group meetings offer a collaborative space where developers can ask questions and receive support from peers and experts. Whether it's debugging a problem, discussing best practices, or understanding implementation nuances, the User Group is an excellent place to get practical guidance from those who have been there before.
Just this last week, the Credo project reached out to the User Group about a UI/UX concern. After some discussion, the group came up with a better solution than the one proposed and is coordinating follow-up.
3. Learn About DIDComm Adoption Across Industries and Regions
DIDComm is being adopted across industries, from finance and healthcare to travel and enterprise security. By attending these meetings, you gain insight into how organizations worldwide are leveraging DIDComm to enhance security and privacy in digital communications. Understanding the breadth of adoption can help you identify potential partnerships, new use cases, and emerging trends that may influence your own projects.
Read more about DIDComm and deployment examples from around the world.
Take Action: Join the Next User Group Meeting
Join us and be part of the conversation shaping the future of decentralized identity communication! To accommodate a global audience, there are two meeting times: one convenient for North American participants and another scheduled for APAC-friendly time zones.
DIF DIDComm User Group meeting (US Time Zones)
DIF DIDComm User Group meeting (APAC/EU Time Zones)

Interested in shaping the DIDComm standard? Contact DIF about becoming a member and contributing to the DIDComm Working Group.
To stay updated on upcoming meetings and receive invitations, check the DIF events calendar or subscribe to the DIF newsletter.
Privacy-Preserving Traceability with Digital Product Passports & Data Spaces Interoperability
The Consumer Packaged Goods (CPG) industry is undergoing a rapid transformation. Consumers demand accountability and sustainability—over 70% want detailed product information—while new regulations like the EU’s Ecodesign for Sustainable Products Regulation (ESPR) and the Uyghur Forced Labor Protection Act (UFLPA) reshape the landscape. Big data leaks and supply chain disruptions have become the norm. As regulations grow increasingly stringent, consumer trust falters, and operational costs skyrocket, a paradigm shift is essential. The industry needs verifiable, trusted data seamlessly integrated — and a system that enables seamless, permissioned data exchange across the value chain.
Digital Product Passports (DPPs) are a key component of this transformation. A DPP is a secure, globally unique digital record that stores verifiable information about a product throughout its lifecycle—from raw materials to disposal. Widespread adoption of DPPs is critical for building trust, strengthening value chain resilience, and ensuring regulatory compliance. According to the EU, which now requires DPPs for nearly all products sold in the EU, “the DPP is designed to close the gap between consumer demands for transparency and the current lack of reliable product data.”
However, DPPs alone are not enough. To truly unlock their potential, we need a standardized framework for data spaces interoperability—a system that enables direct, seamless transactions between participants across industries. Modern value chains involve collaboration among thousands of stakeholders, including enterprises, regulators, and consumers. When these value chains function efficiently, they improve product quality, optimize resource allocation, and lower costs. Yet, despite technological advancements, achieving this level of coordination remains a challenge.
Introduction to MOBI’s Web3 Infrastructure
Today, many organizations rely on third-party platforms and proprietary applications for data exchange, resulting in fragmented, siloed systems. This lack of interoperability limits collaboration and hinders critical processes, from ensuring product safety and ethical sourcing to enabling proper recycling and end-of-life management. Recognizing this, MOBI was formed in 2018 as a neutral convener for organizations to develop standards and infrastructure for DPPs and data spaces interoperability. MOBI’s Web3 Infrastructure, comprising Citopia Decentralized Marketplace (DM) and the Integrated Trust Network (ITN), offers a secure, decentralized marketplace framework with standardized communication protocols. Think of it as a private internet, wherein entities can engage in secure, autonomous, encrypted transactions.
Citopia DM and the ITN are built for Self-Sovereign Data and Identity (SSDI). Each participant owns and manages their own Self-Sovereign Digital Twin (SSDT), which stores two things:
- a globally unique Decentralized Identifier (DID), which is anchored and validated in the ITN
- Verifiable Credentials (VCs) used for transactions in Citopia DM

This SSDT allows participants to engage in standardized, secure, and compliant transactions on Citopia DM with selective disclosure (data only goes to intended recipients). Both Citopia and the ITN are system- and cloud-agnostic, meaning stakeholders can seamlessly communicate while retaining their existing systems and web services. This removes the need for costly one-off integrations and eliminates prohibitive onboarding/maintenance costs, providing a robust foundation for data spaces interoperability.
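For readers unfamiliar with the W3C building blocks, the sketch below shows roughly what a DID and a VC held in an SSDT might look like. The field values and credential type are illustrative assumptions, not MOBI's published schemas:

```typescript
// Illustrative shapes of a W3C-style DID reference and Verifiable Credential
// as they might be held in a participant's SSDT. Values are hypothetical.
interface VerifiableCredential {
  "@context": string[];
  type: string[];
  issuer: string;                        // DID of the issuing participant
  credentialSubject: Record<string, unknown>;
  proof?: Record<string, unknown>;       // signature enabling verification
}

// Hypothetical participant DID, anchored and validated in the ITN.
const participantDid = "did:example:battery-maker-123";

const dppCredential: VerifiableCredential = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "DigitalProductPassportCredential"],
  issuer: participantDid,
  credentialSubject: {
    productId: "urn:example:sku:4711",
    recycledContent: "32%", // disclosed selectively, only to intended recipients
  },
};
```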
Citopia DM unlocks new possibilities for DPPs by enabling direct peer-to-peer transactions between value chain participants. Removing reliance on third-party intermediaries lowers costs, increases traceability, and drives compliance with emerging regulatory requirements. Companies leveraging MOBI’s infrastructure can streamline operations, reduce data silos, and improve their ability to meet evolving consumer and regulatory demands. For regulators and consumers of CPGs, Citopia DM offers easy access to trusted DPPs issued by companies. For regulators, this makes it easy to verify compliance claims. For consumers, access to DPPs can inspire confident purchasing decisions and boost brand loyalty.
Laying the Foundation for Generative AI Applications
MOBI is going to take this one important step further. The robust infrastructure it has built not only addresses the critical need for traceability and data exchange but also lays the foundation for powerful generative AI applications. By leveraging the rich, contextual data housed within DPPs and the seamless data flow facilitated by Citopia and SSDTs between rich data spaces, generative AI can move beyond generic analyses to provide participant-specific value. Imagine a scenario where:
- Consumers can interact with generative AI agents to understand the complete lifecycle of a product they are considering purchasing. These agents, equipped with DPP data accessed through Citopia, can answer nuanced questions about a product's sustainability footprint, ethical sourcing, or even detailed ingredient breakdowns, generating responses tailored to the individual consumer's values and concerns. For example, a consumer with specific dietary restrictions or sustainability preferences could ask a generative AI agent: “Show me the carbon footprint and allergen information for the ingredients in this cereal, and suggest alternatives with lower environmental impact and no nuts.” The agent, accessing the DPP via Citopia, can generate a personalized, privacy-preserving response, drawing from verified data and offering actionable recommendations.
- CPG companies can utilize generative AI to optimize their operations and gain deeper market understanding. Generative AI agents can analyze aggregated and anonymized DPP data from across the value chain within Citopia to identify supply chain inefficiencies, predict potential disruptions, or personalize marketing campaigns with unprecedented precision. For instance, a generative AI agent could analyze DPP data to recommend optimal sourcing strategies based on real-time insights into material availability, ethical considerations, and environmental impact, while ensuring compliance with regulations like the UFLPA. Furthermore, generative AI can assist in generating customized sustainability reports or proactively flag potential regulatory compliance issues based on DPP data, streamlining operations and reducing risks.
- Regulators can employ generative AI agents to efficiently monitor and verify compliance with evolving regulations like the ESPR. These agents can be granted permissioned access to DPP data within Citopia, enabling them to automatically audit product information against regulatory requirements and identify potential non-compliance issues at scale. Generative AI can generate summaries of compliance status across product categories or highlight specific areas needing further investigation, significantly enhancing regulatory oversight and consumer protection.

The key to enabling these generative AI applications lies in the architecture of MOBI's Web3 infrastructure. SSDTs with secure identities backed by the ITN ensure that each participant retains control over their data, can choose what data to disclose (and to whom), and can securely exchange verifiable data in a regulatory-mandated Zero Trust Architecture.
Generative AI agents operating within this framework can be designed to access and process data in a privacy-preserving manner. For example, they can utilize techniques like differential privacy or federated learning to generate insights from aggregated DPP data without needing to access or expose the raw, sensitive data of individual participants. Selective disclosure of VCs ensures that only authorized agents receive the necessary data points, minimizing the risk of data breaches or misuse.
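As one concrete illustration of the privacy techniques mentioned above, here is a minimal sketch of the Laplace mechanism from differential privacy: an agent publishes a noisy aggregate (say, a count derived from DPP data) rather than raw participant records. The epsilon and sensitivity values are assumptions for illustration:

```typescript
// Laplace mechanism sketch: add calibrated noise to an aggregate statistic.
function laplaceNoise(scale: number): number {
  // Inverse-CDF sampling; u in (-0.5, 0.5), rare edge cases ignored for brevity.
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Differentially private count: noise scale = sensitivity / epsilon.
// epsilon = 1.0 and sensitivity = 1 are illustrative assumptions.
function dpCount(trueCount: number, epsilon = 1.0, sensitivity = 1): number {
  return trueCount + laplaceNoise(sensitivity / epsilon);
}

// A regulator's agent would see dpCount(nonCompliantLots), not the raw data.
```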
Conclusion
This means that the combination of DPPs and interoperable data spaces facilitated by MOBI’s infrastructure and generative AI represents an entirely new business phase for the CPG industry. It moves beyond basic traceability to enable a future where data becomes a dynamic tool for generating participant-specific value, fostering deeper consumer trust, optimizing business operations, and ensuring robust regulatory compliance – all within a secure and privacy-respecting ecosystem.
MOBI’s Web3 infrastructure is, therefore, a game-changer for the CPG industry, offering a scalable, decentralized approach to data verification and interoperability while enabling participant-specific insights and recommendations using generative AI. As consumer expectations evolve and regulatory landscapes shift, embracing decentralized, self-sovereign solutions will be the key to sustainable growth and competitive advantage. The future of CPG lies in verifiable, trusted data, and MOBI is leading the way in building the infrastructure to support it.
The post The Digital Future of Consumer Packaged Goods (CPG): Embracing Transparency and Insights with Digital Product Passports and Data Spaces Interoperability first appeared on MOBI | The New Economy of Movement.
Biometrics are connecting with payment credentials, whether through numberless credit cards and banking apps or passkeys, as concrete steps towards linking digital identity and payment systems show up as a major theme in the week's most-read stories on Biometric Update. Mastercard announced it will ditch the familiar credit card number in favor of on-device biometrics and tokenization, while everyone in digital wallets, from the EUDI Wallet Consortium to Fime and Mattr to Apple, is looking at how to bring together identity and payments, and Visa argues for the role of passkeys in a converged digital ID and payments ecosystem.
Passkey authentication replaces traditional passwords with a pair of cryptographic keys—public and private. The private key stays on the user’s device, while the public key sits on the server. During login, the server issues a challenge that only the private key can solve, and the response gets verified using the public key. No passwords are transmitted or stored, which reduces the attack surface significantly. Password leaks and brute-force attempts become non-issues because there is no static secret to steal or guess.
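In browser code, that challenge/response flow maps onto the standard WebAuthn API roughly as follows. The server endpoints and encoding are hypothetical, and real code must also serialize the assertion's binary fields before posting it:

```typescript
// Browser-side sketch of passkey sign-in via the WebAuthn API.
async function signInWithPasskey(): Promise<void> {
  // 1. The server issues a random challenge (hypothetical endpoint,
  //    assumed to return base64).
  const { challenge } = await fetch("/auth/challenge").then((r) => r.json());

  // 2. The authenticator signs the challenge with the private key,
  //    which never leaves the device.
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      userVerification: "required", // e.g. biometric or PIN on the device
    },
  });

  // 3. The server verifies the signature using the stored public key.
  //    (Real code must base64url-encode the assertion's ArrayBuffer fields.)
  await fetch("/auth/verify", {
    method: "POST",
    body: JSON.stringify(assertion),
  });
}
```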
FIDO2 is a joint initiative by the FIDO Alliance and the World Wide Web Consortium (W3C) aimed at delivering streamlined, strong authentication without relying on passwords. It defines a set of technical components: WebAuthn and CTAP2 (Client to Authenticator Protocol). WebAuthn standardizes how a web application interacts with an authenticator—often a platform feature like a secure enclave on a phone or a hardware security key. CTAP2 governs how that authenticator communicates with the client device, such as a laptop or smartphone.
Username and password authentication is a fixture in healthcare but one that continues to hinder operations and put patient privacy – and care – at risk. In just the first three months of 2024, there were over 116 data breaches in the healthcare industry, allowing cybercriminals to access private patient data, medications, clinical records, Social Security numbers, and more by employing tactics like phishing emails and malware.
As a result, passwordless authentication is steadily gaining traction, enabling healthcare facilities to implement more secure user verification and streamline access management.
The transition to passwordless won’t happen overnight. However, we can expect continued adoption of passwordless methods over the next decade, as the challenges of traditional passwords become too glaring to ignore in this mission-critical industry.
The FIDO Alliance recently held a pivotal one-day seminar exploring the transformative power of passkey authentication in Australia and beyond.
This dynamic event attracted 150-200 influential leaders and decision-makers from government, consumer, and enterprise sectors to explore the future of secure online identity in Australia, New Zealand, and overseas. The agenda covered use cases, case studies, and the latest data and collaboration happening to implement passkeys for consumers, workers, and governments regionally and around the world.
View the presentations below (all from the FIDO Alliance):

FIDO Alliance – Simpler Stronger Authentication.pptx
From Authentication to Assurance – Managing risk in passkeys and beyond.pptx
From Requirements to Rollout – VicRoads’ Experience with Passeys.pptx
How to Simplify and Accelerate Passkey Adoption.pptx
IdentityVerification IDV + Passkeys.pptx
Insights from Large-Scale B2C Passkey Deployments.pptx
Passkeys – Why Moving Now Makes Sense.pptx
FIDO and Government: How Policymakers and Regulators are Thinking About Authentication.pptx

The Human Colossus Foundation (HCF) recently participated in the 2025 Geneva Winter Summit, a global gathering of AI experts, policymakers, and innovators hosted by the AI for Developing Countries Forum (AIFOD) at the United Nations Office at Geneva. The summit focused on AI’s potential to drive equitable digital development in developing countries, culminating in the AIFOD Geneva Winter Summit Declaration 2025, which charts a path for inclusive AI progress.
At the core of HCF’s contribution were two topics:
The importance of access to accurate data and its provenance in AI training, especially for small and less digitalized nations.
The role of distributed governance in fostering ethical AI agent ecosystems, which can give developing countries an edge without the need to build massive data centers for huge AI models.
The Power of Accurate Data and Provenance in AI Training

For AI to serve communities effectively, it requires access to massive data sets, ideally localized to avoid biases and misinformation. Unfortunately, many developing countries do not yet have such access; in many cases their digital transformation is just starting. That late start is also an opportunity: they can begin shaping digital policies and strategies now, investing from day one in more accurate data ecosystems and gaining an edge by relying on quality rather than quantity. AI models trained on verifiable, well-documented data are more accurate in their functions. Data provenance—the ability to trace data back to its source—is essential for ensuring AI models are transparent, reliable, and compliant with global data standards, while also allowing citizens and countries to check for potential bias and misinformation in AI-generated insights.
At the summit, HCF emphasized how Dynamic Data Economy (DDE) principles enable organizations to structure and verify data origins, ensuring AI models are trained on high-quality, trustworthy inputs.
Why does data provenance matter?
Builds trust and accountability in AI decision-making
Helps organizations comply with data protection regulations (e.g., GDPR, HIPAA)
Prevents bias and misinformation in AI-generated insights
HCF advocates for data-oriented architectures, where individuals and businesses can validate their data before it is used to train AI, ensuring a more transparent and responsible ecosystem.
Distributed Governance: The Key to Ethical and Sustainable AI

Governance models must evolve to keep pace with AI’s rapid growth. Traditional centralized governance structures often struggle to provide the inclusivity and adaptability needed for cross-border AI systems.
At the summit, HCF promoted a distributed governance model to help with classification standards, cross-jurisdictional ethical alignment, and enhanced data transparency. A meta-governance framework would equip stakeholders with accurate, verifiable, and accessible information, enabling informed AI adoption that aligns with local regulations and ethical values. By promoting fairness, transparency, and accountability, this framework supports a responsible, sustainable AI-driven digital economy.
How can distributed governance benefit AI ecosystems?
Inclusive decision-making – Empowering communities, businesses, and policymakers to shape AI’s future
Enhanced accountability – Avoiding centralized control that can lead to bias or exploitation
Interoperability across jurisdictions – Helping AI operate ethically across borders without conflicting regulations
Through distributed governance models, AI can serve global communities fairly, ensuring that technological advancements reflect a shared ethical and socio-economic vision.
The 2025 AIFOD Declaration: A Commitment to Inclusive AI

The Geneva Winter Summit Declaration 2025 serves as a blueprint for AI governance, highlighting the need for:
Equitable access to AI advancements for developing nations
A balance between regulation and innovation
Stronger international collaboration to ensure ethical AI deployment
HCF stands committed to advancing these goals through the Dynamic Data Economy, ensuring AI serves all communities equally, ethically, and sustainably.
Looking Ahead: HCF’s Vision for Ethical AI Development

The Human Colossus Foundation will continue advocating for accurate data management, distributed governance, and responsible AI adoption. Through collaboration with global stakeholders, we aim to shape AI ecosystems that prioritize transparency, inclusivity, and sustainability.
26 November 2024
At the Change Hub Berlin
Hardenbergstraße 32, 10623 Berlin
On 26 November, the second Digital Society Conference (DSC2) took place at the Change Hub in Berlin. On 27 November, a workshop on organizational identities was held as part of DSC2.
As in the previous year, the DSC focused on digital credentials and identities. In a keynote, panels, and interactive work sessions on 26 November 2024, the genesis and framework conditions of the European digital identity solution and the EUDI Wallet were explained and the facts conveyed. We also explored how this European solution fits into the international context and what, concretely, awaits us as a society in the coming years. The government-issued eID is only the beginning of a fast-moving and demanding development, in the course of which companies too will receive a digital ID, to guarantee the exchange of data and the secure identification of entities.
We thank the speakers, panelists, and especially our sponsors, without whom we could not have realized DSC2.
DSC2 – Workshop 26.11.24

Digital credentials and identities – are you ready?
Jörg Fischer, Bundesdruckerei
Work sessions formed around the following topics:
Connecting municipalities and small businesses
Wikidata / organizational identities / definition of ID theft / fraud
Business models / payment integration
SDI initiatives outside the EU / interoperability
Killer applications for the European wallet (EUDI Wallet)

There was also a work session under the tongue-in-cheek header “Dackel-Club” (Dachshund Club), which discussed uses and applications of digital identities that require no government-issued identity, such as membership in a dachshund club. This may sound curious, but considering that more than 37% of Germany’s population is active in clubs and associations, this field of application is quite interesting for advancing wallet adoption and familiarity.

Handout: Wegweiser
Finally, the Wegweiser Digitale Identitäten und Nachweise (a guide to digital identities and credentials) was presented. The guide is a product of collaboration within the BMWK’s Secure Digital Identities (SDI) research projects and is now available for download. Its goal is to offer a low-threshold introduction to the terminology of digital credentials; it explains the terms wallet, digital identity, digital credential, and signature. Use of the guide is free.
DSC2 – Workshop 27.11.24

On the second day of DSC2, a specialist workshop devoted entirely to organizational identities (OrgID) took place. Presentations first set out the current state of play, which then served as the basis for work in two large expert groups whose goal was to further specify the open questions around access, data exchange, and technical design.
20241120_EUDI_Wallet_Architecture
The Food Safety Modernization Act (FSMA) compliance deadline is approaching quickly, giving companies less than a year to meet new food safety and traceability requirements. But beyond compliance, why does traceability matter?
In this episode, Wiggs Civitillo, Founder & CEO of Starfish, joins hosts Reid Jackson and Liz Sertl to discuss how product traceability can streamline recalls, reduce food waste, and build consumer trust. Inconsistent data and lack of interoperability are some of the biggest challenges companies face in food traceability. Starfish addresses these challenges by enabling secure, seamless data sharing across the supply chain.
Tune in to hear FSMA 204 explained and discover solutions to help companies stay compliant.
In this episode, you’ll learn:
Practical solutions to meet FSMA 204 requirements efficiently
The impact of real-time data on food safety monitoring
How companies can use traceability to build consumer trust
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(01:41) Challenges and lessons from the IBM Food Trust
(05:25) How Starfish connects supply chains
(13:58) Recalls, food safety, and consumer trust
(19:19) Understanding FSMA 204 and compliance
(25:06) Benefits of product traceability
(30:39) Wiggs’ favorite tech tool
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Wiggs Civitillo on LinkedIn
Guest blog by Greg Bernstein, Dave Longley, Manu Sporny, and Kim Hamilton Duffy
Following the IETF/IRTF Crypto Forum Research Group’s adoption of the BBS Blind Signatures and BBS per Verifier Linkability (“BBS Pseudonym”) specifications, this blog describes historical context and details of cryptographic pseudonyms, as well as the privacy features they enable.
The discussion of cryptographic pseudonyms for privacy preservation has a long history, with Chaum’s 1985 popular article “Security without identification: transaction systems to make big brother obsolete” (Chaum1985) addressing many of the features of such systems such as unlinkability and constraints on their use such as one pseudonym per organization and accountability for pseudonym use. Although Chaum’s proposal makes use of different cryptographic primitives than we will use here, one can see similarities in the use of both secret and “public” information being combined to create a cryptographic pseudonym.
Lysyanskaya’s 2000 paper (Lysya2000) also addresses the unlinkability of pseudonyms, and additionally provides protections against dishonest users. It also gives practical constructions, similar to those used in our draft, based on discrete logarithms and sigma-protocol-based ZKPs. Finally, as part of the ABC4Trust project, three flavors of pseudonyms were defined:
1. Verifiable pseudonyms are pseudonyms derived from an underlying secret key.
2. Certified pseudonyms are pseudonyms derived from a secret key that also underlies an issued credential.
3. Scope-exclusive pseudonyms are verifiable pseudonyms that are guaranteed to be unique per scope string and per secret key.

Figure 1: Types of Pseudonyms

The BBS based pseudonyms in our draft are aimed primarily at providing the functionality of the pseudonym flavors 2 and 3 above.
How BBS is Fundamentally Better for Solving Dishonest Holder Fraud

Beyond the usual anti-forgery and tamper-evident protections that digital signatures provide, there are two different types of credential fraud that unlinkable credentials need to mitigate: third-party fraud and first-party fraud.
Third-party fraud is when one party uses another party's credentials without their knowledge or approval. This involves the theft of an honest holder's credentials.
First-party fraud is when a legitimate credential holder intentionally allows other parties to covertly use their credentials. This involves not the theft of credentials, but rather, a dishonest holder.
BBS signatures provide two different security features, one to address each of these cases:
1. Holder Multifactor Authentication

This security feature binds a BBS credential to a particular secret that only the holder knows and that an honest holder will not share access to nor use of. When a holder presents a BBS credential, a verifier can require that the holder prove use of this secret, thereby enforcing this protection on the holder's behalf. This is an anti-theft mechanism for mitigating third-party fraud.
Note: This feature is sometimes called "holder binding", but it might be more accurately understood as "Holder Multifactor Authentication". This is because the holder is not themselves bound to the credential and it is not a mitigation for first-party fraud. It instead works like another MFA device does when performing authentication. If a dishonest holder shares that MFA device or an API to use it, then someone else can unlinkably present their credential (with the holder's approval -- perhaps even for a fee). Stopping dishonest holders from doing this is a significant challenge that involves locking down users' hardware and software – and a single failure in this scheme could potentially result in an unlimited number of unlinkable, fraudulent presentations. This feature is therefore not a protection for verifiers, but rather one for honest holders against theft. With this in mind, it can be implemented such that a holder is free to use software or hardware of their choice to provide the protection, without requiring specific approval from the issuer.
2. Pseudonyms

This security feature binds a BBS credential to a secret that is constructed from inputs from both the holder and the issuer. When a holder presents a BBS credential, a verifier can require that the holder present a pseudonym that is based on this secret and a contextual identifier. Each time the same BBS credential is presented using the same contextual identifier, the pseudonym will be the same. This prevents a dishonest holder from covertly enabling an unlimited number of unlinkable presentations of their credential by any parties they authorize. It is an anti-cloning mechanism for mitigating first-party fraud.
Contextual identifiers can be any value, but need to be agreed upon between the verifier and the holder for a given use case. To give some examples for the presentation of credentials on the Web, a contextual identifier could be a URL like a Web origin (e.g., "https://website.example"), a Web origin with a certain path (e.g., "https://website.example/groups/wind-surfing"), or protocol-defined combination of a URL and a time range, allowing for pseudonyms to be "forgotten" or "rotated" after sufficient time has passed.
Note: Other credential schemes aim to mitigate first-party fraud by limiting the device and software that a holder chooses to engage with, sometimes referred to as "holder binding". With this approach, the holder's own device and software are trusted to be leveraged against them to constrain their behavior. This approach requires device and software allow lists, trust framework management, significant additional protocol security considerations, and ultimately means that the issuer chooses the holder's device and software (or provides them with a list of acceptable options from which they may choose). This has additional side effects, such as centralizing and vendor lock-in effects on the marketplace of devices and software. Finally, with this approach, it also only takes a single dishonest holder to thwart the protections of the device or software (that they have physical access to) to re-enable an unlimited number of unlinkable fraudulent presentations of a valid credential in the ecosystem. Catching this behavior after the fact logically requires being able to link presentations once again, one way or another, which would defeat the privacy aims of the scheme.
Overview: BBS Signature Bound Pseudonyms

The BBS signature scheme, the foundation for BBS Pseudonyms, is based on a three-party model:
Signer (also known as issuer): Issues credentials
Prover (also known as holder or user): Receives credentials
Verifier: Validates proofs

A prover obtains a BBS signature from a signer over a list of BBS messages and presents a BBS proof (a proof of possession of a BBS signature) along with a selectively disclosed subset of the BBS messages to a verifier.
Figure 2: Example of BBS Signature Flow

Each BBS proof generated is unlinkable to other BBS proofs derived from the same signature and to the BBS signature itself. If the disclosed subset of BBS messages is not itself linkable, the proofs cannot be linked by their cryptographic presentation alone.
Note: the language used in this section is intentionally informal; for a more precise explanation of phrases like “…cannot be linked by their cryptographic presentation alone”, please see Lysya2000.
BBS pseudonyms extend the BBS signature scheme to “bind” a “cryptographic pseudonym” to a BBS signature while retaining all the properties of the BBS signature scheme:
A short signature over multiple messages
Selective disclosure of a subset of messages from prover to verifier
Unlinkable proofs

In addition, BBS pseudonyms provide for:
An essentially unique identifier bound to a signature/proof of signature whose linkability is under the control of the prover in conjunction with a verifier or group of verifiers. Such a pseudonym can be used when a prover revisits a verifier, to allow the verifier to recognize the prover when they return, or for the prover to assert their pseudonymous identity when visiting a verifier
Assurance of per-signer uniqueness, i.e., the signer assures that the pseudonyms that will be guaranteed by the signature have not been used with any other signature issued by the signer (unless a signature is intentionally reissued)
The signer cannot track the prover presentations to verifiers based on pseudonym values
Verifiers in separate “pseudonym groups” cannot track prover presentations

How BBS Pseudonyms Work

Overview

To realize the above feature set we embed a two-part pseudonym capability into the BBS signature scheme. The pseudonym’s cryptographic value will be computed from a secret part, which we call the nym_secret, and a part that is public or at least shared between the prover and one or more verifiers. The public part we call the context_id.
A simplified overview of this flow is shown in Figure 3:
Figure 3: Simplified overview of BBS Pseudonym flow

Issuance

To bind a pseudonym to a BBS signature, we have the signer utilize Blind BBS signatures and essentially sign over a commitment to the nym_secret. Hence only a prover who knows the nym_secret can generate a BBS proof from the signature (and also generate the pseudonym proof).
The prover chooses their (random) prover_nym and commits to it. They then send the commitment along with a ZKP that the prover_nym underlies this commitment. The signer verifies the commitment to the prover_nym, then generates the signer_nym_entropy and “adds” it to the prover_nym during the signature process. Note that this can be done since we sign over the commitment and we know the generator for the commitment.
Figure 4: Credential Issuance detailed flow

As in Lysya2000, we are concerned with the possibility of a dishonest user and hence require that nym_secret = prover_nym + signer_nym_entropy be the sum of two parts, where the prover_nym is a prover's secret, only ever sent to the signer inside a blinding, hiding commitment. The signer_nym_entropy is “added” in by the signer during the signing procedure and sent back to the prover along with the signature.
Verification

The pseudonym is calculated from nym_secret and context_id using discrete exponentiation.
nym_secret: A two-part secret combining:
prover_nym: Generated and known only by the prover
signer_nym_entropy: Contributed by the signer during signing
context_id: A public identifier that is shared between the prover and one or more verifiers

This is similar to the computations in Lysya2000 and ABC2014. The pseudonym is presented to the verifier along with a ZKP that the prover knows the nym_secret and uses it and the context_id to compute the pseudonym value. A similar proof mechanism was used in Lysya2000. See chapter 19 of BS2023 for an exposition on these types of ZKPs.
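As a rough illustration of the structure just described, the sketch below computes a pseudonym by discrete exponentiation in a toy group. The real draft operates in the prime-order BLS12-381 group with a proper hash-to-curve map, so none of these parameters are secure or normative.

```python
# Toy sketch of pseudonym = Base(context_id) ^ nym_secret. Uses the Mersenne
# prime 2^127 - 1 as a stand-in modulus; the actual draft uses BLS12-381.
import hashlib
import secrets

P = (1 << 127) - 1  # toy modulus, NOT cryptographically appropriate

def hash_to_base(context_id: str) -> int:
    """Map a context identifier to a group element (simplified hash-to-group)."""
    return int.from_bytes(hashlib.sha256(context_id.encode()).digest(), "big") % P

def pseudonym(nym_secret: int, context_id: str) -> int:
    """The pseudonym's cryptographic value, via discrete exponentiation."""
    return pow(hash_to_base(context_id), nym_secret, P)

# nym_secret is the sum of a prover-chosen secret and signer-added entropy.
prover_nym = secrets.randbelow(P)
signer_nym_entropy = secrets.randbelow(P)
nym_secret = (prover_nym + signer_nym_entropy) % P

# Same secret + same context => same pseudonym, so a verifier can recognize
# a returning prover; a different context yields an unlinkable value.
assert pseudonym(nym_secret, "https://website.example") == pseudonym(
    nym_secret, "https://website.example")
assert pseudonym(nym_secret, "https://website.example") != pseudonym(
    nym_secret, "https://other.example")
```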
Figure 5: BBS Pseudonym verification flow

See the appendix for a detailed diagram showing the complete BBS pseudonym issuance and verification flow.
BBS Pseudonym Example Applications

Certifiable Pseudonyms

Certifiable pseudonyms work as follows: the prover creates unique pseudonyms based on context_ids they choose. While the issuer guarantees that the nym_secret is unique, the issuer never learns its value.
When using the pseudonym, the prover presents both the context_id and pseudonym to identify themselves, along with any attributes (messages) they choose to reveal. No one without the nym_secret and signature can produce a proof that they “own” the pseudonym. The prover can create as many different, unlinkable pseudonyms as they like by choosing different values for the context_id.
Figure 6: In this example of Certifiable Pseudonyms, the prover chooses a different context_id per service they interact with

Scope Exclusive Pseudonyms

With scope exclusive pseudonyms, the verifier or group of verifiers requires the use of a specific context_id. This allows the verifier (or group of verifiers) to track visits by the prover using this credential/pseudonym. A verifier can limit data collection, i.e. practice data retention minimization, by periodically changing the context_id, since pseudonyms produced using different context_ids cannot be linked. For example, a context_id like “mywebsite.com/17Nov2024” that changes daily means the verifier could only track visits within a single day.
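Reusing the `pseudonym()` sketch from above, daily rotation of the context_id looks like this; the date format merely mimics the example string.

```python
# Scope-exclusive pseudonym with a daily-rotating context (illustrative only).
from datetime import date

daily_context = f"mywebsite.com/{date.today():%d%b%Y}"  # e.g. "mywebsite.com/17Nov2024"
daily_pseudonym = pseudonym(nym_secret, daily_context)
# Tomorrow's context differs, so tomorrow's pseudonym is unlinkable to today's.
```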
Figure 7: Scope exclusive pseudonyms where verifiers, or groups of verifiers, require a specific context_id. In this example, scope is set at a per-day level

Scope Exclusive Pseudonyms with Monitoring
Scope exclusive pseudonyms with monitoring enable regulated privacy in scenarios requiring third-party oversight. Consider a system for purchasing controlled chemicals: A prover with appropriate credentials uses a different pseudonym with each vendor. This prevents vendors from colluding on prices or learning secret formulas by tracking a prover's complete purchase history across different vendors. At the same time, for regulatory compliance and public safety, vendors must report all purchases to a monitor, including the pseudonym used and their vendor-specific context_id. The prover shares their nym_secret only with the monitor, allowing the monitor to link different pseudonyms to the same prover when necessary. This separation between nym_secrets and other credential-binding secrets is crucial - it enables regulatory oversight without introducing additional cross-tracking by vendors.
Figure 8: In this example, scope exclusive pseudonyms with monitoring are used to enable regulatory compliance while not reusing pseudonyms across vendors

A Short Selection of References

[Chaum1985] D. Chaum, “Security without identification: transaction systems to make big brother obsolete,” Commun. ACM, vol. 28, no. 10, pp. 1030–1044, Oct. 1985, doi: 10.1145/4372.4373.
[Lysya2000] A. Lysyanskaya, R. L. Rivest, A. Sahai, and S. Wolf, “Pseudonym Systems,” in Selected Areas in Cryptography, H. Heys and C. Adams, Eds., Lecture Notes in Computer Science, vol. 1758, Berlin, Heidelberg: Springer, 2000, pp. 184–199, doi: 10.1007/3-540-46513-8_14.
[ABC2014] P. Bichsel et al., “D2.2 – Architecture for Attribute-based Credential Technologies – Final Version,” Aug. 2014. [Online]. Available: https://abc4trust.eu/download/Deliverable_D2.2.pdf. [Accessed: Feb. 10, 2025].
[BS2023] D. Boneh and V. Shoup, “A Graduate Course in Applied Cryptography,” Version 0.6. [Online]. Available: https://toc.cryptobook.us/book.pdf. [Accessed: Feb. 10, 2025].
[IETF-BBSblind-00] Internet Engineering Task Force, “BBS Blind Signatures – draft-irtf-cfrg-bbs-blind-signatures-00.” [Online]. Available: https://www.ietf.org/archive/id/draft-irtf-cfrg-bbs-blind-signatures-00.html. [Accessed: Feb. 10, 2025].
[IETF-BBSlink-00] Internet Engineering Task Force, “BBS Per-Verifier Linkability – draft-irtf-cfrg-bbs-per-verifier-linkability-00.” [Online]. Available: https://www.ietf.org/archive/id/draft-irtf-cfrg-bbs-per-verifier-linkability-00.html. [Accessed: Feb. 10, 2025].

Appendix: Complete BBS Pseudonym Flow (Issuance & Verification)

Next Steps

Dive deeper into the BBS Blind and Per Verifier Linkability specifications:
Review bbs-blind-signatures and bbs-per-verifier-linkability
Consider implementing the specifications to evaluate them further
Provide feedback in the IETF Crypto Forum Research Group
Join DIF’s Applied Crypto Working Group for ongoing meetings
Contribute to W3C Data Integrity Specification Development:
To evaluate how BBS can be used in the context of W3C Verifiable Credentials, join the W3C VC Data Integrity Community Group
Stay up to date:
Subscribe to the DIF blog to receive updates
Thales has unveiled a new solution designed to streamline the deployment and management of FIDO security passkeys for large-scale implementations. The OneWelcome FIDO Key Lifecycle Management solution enables organizations to efficiently manage the complete lifecycle of FIDO keys while transitioning to passwordless authentication systems. The launch follows Thales’ previous efforts in passwordless authentication, expanding their enterprise security portfolio.
The solution provides IT teams with comprehensive control over FIDO key management, from initial enrollment through to eventual revocation. By allowing IT departments to pre-register keys and handle lifecycle management tasks, the platform helps reduce the burden on end users while maintaining security standards. The approach supports recent FIDO Alliance guidelines for enterprise passkey implementation, which emphasize the importance of streamlined deployment processes.
A key feature of the solution is its integration with Microsoft Entra ID through FIDO2 provisioning APIs, enabling organizations to pre-register Thales FIDO keys for their users. The integration is particularly relevant for enterprises using Microsoft 365, providing secure authentication capabilities from initial deployment. The feature arrives as Microsoft implements mandatory multi-factor authentication across its enterprise platforms.
We’re excited to welcome two partners to our Matchbox Program this year: Trans Youth Initiative-Uganda (TYI-Uganda) and TechHer.
The post Announcing our two new Matchbox partners appeared first on The Engine Room.
Fifteen years ago, The Onion published this story:
Study Finds Paint Aisle At Lowe’s Best Place To Have Complete Meltdown
Now it’s the vitamin aisle at CVS:
Enshittification at work.
When Cory Doctorow coined enshittification, he was talking about how online platforms such as Google, Amazon, and TikTok get shitty over time. Well, the same disease also afflicts many big retail operations, especially ones that flood their zones with discounts and other gimmicks, enshittifying what marketers call “the customer experience” (aka CX).
Take the vitamin aisle, above. The only people who will ever get down on all fours to read the yellow tags near the floor are the cursed employees who have to creep the length of the aisle putting them there.
For customers, the main cost of shopping at CVS is cognitive overhead. Think about—
All those yellow stickies
All the slow-downs at check-out when you wave your barcode at a reader, punch your phone number into a terminal, or have a CVS worker do the same
All the conditionalities in every discount or “free” offer that isn’t
All the yard-long receipts, such as this one:

And the app!
OMFG, all we really need the damned app for is the one CVS function our life depends on: the pharmacy.
To be fair, the app doesn’t suck at the basics (list of meds, what needs to be refilled, etc.). But it does suck at helping you take advantage of CVS’s greatest strength: that there are so many of them. Specifically,
While that’s down a bit from the peak in 2021, CVS is still the Starbucks of pharmacies. And while they are busy laying off people and investing in tech, you’d think they would at least make it easy to move your prescription from one store to another, using the app. But noooo.
What the app is best for is promotional crap. For example, this:
Look at the small gray type in the red oval: 198 coupons!
After I scroll down past the six Extrabucks Rewards (including the two above), I get these:
First, who wants a full-priced item when it seems damn near everything is discounted?
Second, you’d think that after all these years of checking out with my Extracare barcode (and with the app showing me, under “Buy It Again,” all the stuff I’ve purchased recently), CVS would know I am a standard-issue dude with no interest in cosmetics. So why top the list of coupons with that shit? I suppose it’s to make me scroll down through the other 178 coupons to find other stuff I might want at a cheaper price.
I just did that and found nothing. Why? Because most of the coupons are for health products I already bought or don’t need. (I’m not sick right now.) Also, almost all of the coupons (as you see) expire three days from now.
Now think about the cognitive and operational overhead required to maintain that whole program at CVS. Good gawd.
And is it necessary? At all? When you’re the Starbucks of pharmacies?
Without exception, all loyalty programs like this one are coercive. They are about trapping and milking customers.
But do stores need them? Do customers? Does CVS? Really? When its customers are already biased by convenience.
Pro Tip: Real loyalty is earned, not coerced.
Want your store, or your chain, to be loved? Take some lessons from the most loved chain in the country: Trader Joe’s. In a chapter of The Intention Economy called “The Dance,” I list some reasons why TJ’s is loved by its customers. My main source for that list is Doug Rauch, the retired president of TJ’s, where he worked for 31 years. Here are the top few:
They never use the word “consumer.” They call us “customers,” “persons,” or “individuals.”
They have none of what Doug calls “gimmicks.” No loyalty programs, ads, promotions, or anything else that manipulates customers, raises cognitive overhead, or insults anyone’s intelligence. In other words, none of what marketing obsesses about. “Those things are a huge part of retailing today, and have huge hidden costs,” Doug says. (I think the company’s biggest marketing expense is its float in the Rose Parade.)
They never discount anything, or say anything is “on sale.” Those kinds of signals add more cognitive overhead. TJ’s wants customers not just to assume, but to know. A single price takes care of that.
They have less than no interest in industry fashion. TJ’s goes to no retail industry meetings or conferences, belongs to no associations, and avoids all settings where the talk is about gaming customers. That’s not TJ’s style because that’s not its substance.
They believe, along with Cluetrain, that markets are conversations—with customers. Doug told me his main job, as president of the company, was “shopping along with customers.” That’s how he spent most of his time. “We believe in honesty and directness between human beings… We do this by engaging with the whole person, rather than just with the part that ‘consumes’… We’ll even open packages with customers to taste and talk about the goods.” As a result, “There’s nothing sold at Trader Joe’s that customers haven’t improved.”

Then there’s what Walmart CEO Lee Scott told me in 2000 (at this event) when I asked him “What happened to K-Mart?” From The Intention Economy:
His answer, in a word, was “Coupons.” As Lee explained it, K-Mart overdid it with coupons, which became too big a hunk of their overhead, while also narrowing their customer base toward coupon-clippers. They had other problems, he said, but that was a big one. By contrast, Wal-Mart minimized that kind of thing, focusing instead on promising “everyday low prices,” which was a line of Sam Walton’s from way back. The overhead for that policy rounded to zero.
Which brings me to trust.
We trust Trader Joe’s and Walmart to be what they are. In simple and fundamental ways, they haven’t changed. The ghosts of Joe Coloumbe and Sam Walton still run Trader Joe’s and Walmart. TJ’s is still the “irreverent but affordable” grocery store Joe built for what (in his book) Joe called “the overeducated and underpaid,” and based in Los Angeles. Walmart is still Sam’s five-and-dime from Bentonville, Arkansas. (Lee Scott told me that.)
CVS’s equivalent to Joe and Sam was Ralph Hoagland, a good friend of good friends of ours in Santa Barbara. All of us also shared history around Harvard and Cambridge, where Ralph lived when he co-founded CVS, which stood for Consumer Value Store, in 1963. In those days CVS mostly sold health and beauty products, cheaply. I remember Ralph saying the store’s main virtue was just good prices on good products. Hence the name.
CVS can do a much better job of signaling bargain prices by just making them as low as possible, on the model of Trader Joe’s and Walmart.
I think there is also a good Health position for CVS: one that bridges its health & beauty origins and its eminence as the leading pharmacy chain in the U.S. And it could rest on trust.
I’m thinking now about tech. Specifically, FPCs, for First-Person Credentials. Read what Jamie Smith says about them in his Customer Futures newsletter under the headline The most important credentials you’ve never heard of. Also check out—
What I wrote last year about Identity as Root
What DIF is doing
What Ayra is doing
Other stuff you’ll be hearing about first-person credentials (but isn’t published yet) when you come to the next IIW (April 8–10)
What you’ll be learning soon about re-basing everything (meaning every SKU, as well as every person) on a new framework that is far more worthy of trust than any of the separate collections of records, databases, and namespaces that currently divide a digital world that desperately needs unity and interop—especially around health
And MyTerms, which is the new name for IEEE P7012, the upcoming standard (for which I am the working group chair) that should become official later this year, though nothing prevents anyone from putting its simple approach to work

MyTerms can be huge and world-changing because it flips around the opt-out consent mechanisms that have been pro forma since industry won the industrial revolution and metastasized in the Digital Age. With MyTerms, the sites and services of the world agree to your terms, not the other way around. With MyTerms, truly trusting relationships can be established between customers and companies. This is why I immodestly call it the most important standard in development today.
So I have five simple recommendations for CVS, all to de-enshittify corporate operations and customer experiences:
Drop the whole loyalty thing. Completely. Cold turkey. Hell, fire the marketing department. Put the savings into employees you incentivize to engage productively (not promotionally) with customers. And publicize the hell out of it. Should be fun.
Confine your research to what your human employees learn directly from their human customers.
Be the best version of what you are: a great pharmacy/convenience store chain that’s still long in health and beauty products.
Simplify the app by eliminating all the promotional shit, and by making it as easy as possible for customers to move prescriptions from one CVS store to another.
Watch what’s happening with first-person credentials and MyTerms. Getting on board with those will make CVS a leader, rather than a follower.

Coupon-clipping addicts may feel some pain at first, but if you market the new direction well, making clear that you have “everyday low prices” rather than annoying and labor-intensive discounts (many of which expire in three days), customers will come to love you.
Following the launch by Yuno, merchants in Brazil, Argentina, and Chile can replace increasingly vulnerable traditional authentication methods, such as one-time passwords, with Mastercard Payment Passkey Service, which uses device-based biometrics, such as fingerprints and facial recognition already available on smartphones, to authenticate purchases.
Mastercard Payment Passkey Service also leverages tokenisation technology to ensure that sensitive data is never shared with third parties and remains useless to fraudsters in the event of a data breach, making transactions even more secure.
This technology promises to not only boost the security of online transactions, but also to significantly reduce cart abandonment rates by increasing convenience for merchants’ customers.
Changes are on the way for online shopping and e-commerce. The traditional way of paying for items online by typing in your credit card details (card number and CVV security code) will soon be a thing of the past.
Mastercard and other card payment companies will be introducing a one-click button that will work on any online platform.
One of the reasons why services will be moving to a one-click system is to deter hackers who target merchant sites to steal consumer card information. According to a 2023 study by Juniper Research, merchant losses from online payment fraud will exceed $362 billion globally between 2023 and 2028, with losses of $91 billion in 2028 alone.
The one-click system will protect consumers and their online data.
We are excited to announce OwnYourData’s involvement in the groundbreaking PACE-DPP project (Promoting Accelerated Circular Economy through Digital Product Passports), spearheaded by Virtual Vehicle. As a key consortium partner, OwnYourData leads the work on Data Intermediaries in work package 4, DPP Data Ecosystem.
PACE-DPP aims to provide guardrails and solution building blocks for tackling the core technological and regulatory challenges involved in standing up DPP-based ecosystems. Industrially relevant applications from supply chains in the electronics and wood/pulp/paper industries provide a solid basis for use-case-driven experimentation with key enabling digital technologies like Data Spaces and Digital Twins. The central result will be lightweight, accessible DPP services that unlock the hidden potential of innovative circular economy business models, within the context of the European Green Deal.
OwnYourData plays a critical role in the PACE-DPP project by providing essential expertise in self-sovereign identity (SSI) and data governance for the emerging Digital Product Passport (DPP) ecosystem. We contribute to the semantic annotation of data structures to ensure interoperability within the DPP data ecosystem. Additionally, OwnYourData supplies technical building blocks for decentralized identifiers (DIDs), Verifiable Credentials (VCs), and Verifiable Presentations (VPs), supporting attestations and authentication mechanisms in compliance with OpenID specifications (OID4VC, OID4VCI, OID4VP). By implementing these SSI components, OwnYourData enhances trust and security in data sharing processes, fostering a scalable and privacy-preserving digital infrastructure for product traceability.
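For readers unfamiliar with the data shapes involved, here is a skeletal Verifiable Credential for a product-passport attestation, written as a Python dict; the issuer DID, credential type, and claim values are hypothetical placeholders, not OwnYourData artifacts.

```python
# Skeleton of a W3C Verifiable Credential for a DPP attestation.
# All identifiers and claim values below are hypothetical placeholders.
product_passport_vc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "ProductPassportCredential"],
    "issuer": "did:example:manufacturer-123",       # hypothetical issuer DID
    "validFrom": "2025-01-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:product-batch-456",      # hypothetical product DID
        "material": "recycled pulp",
        "countryOfOrigin": "AT",
        "recycledContentPercent": 72,
    },
    # A real credential also carries a cryptographic proof (e.g., a Data
    # Integrity proof, or SD-JWT per OID4VC); omitted here for brevity.
}
```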
Beyond its technical contributions, OwnYourData actively works to integrate a neutral data intermediary (https://intermediary.at) that facilitates the secure storage and controlled exchange of product-related information. The intermediary serves as a key enabler in supply chain processes, ensuring that product data remains accessible and verifiable while respecting privacy and compliance requirements. In addition, we engage with stakeholders and disseminate the human-centric aspects of digital product passports through collaborations with initiatives such as MyData, promoting transparency and user empowerment in data ecosystems.
The OwnYourData team in the PACE-DPP project consists of experts in data governance, self-sovereign identity, and semantic technologies. Dr. Christoph Fabianek, an authority in data exchange and SSI frameworks, leads the team’s contributions to decentralized identity solutions and Verifiable Credentials. Dr. Fajar J. Ekaputra brings expertise in semantic web technologies, ensuring structured and interoperable data representation. Gulsen Guler, MSc, specializes in data literacy and human-centric solutions, supporting accessibility and usability of digital product passport implementations. DI(FH) Markus Heimhilcher provides expertise in system operations, database management, and Kubernetes maintenance, ensuring a scalable and secure infrastructure. Paul Feichtenschlager contributes skills in data modeling, statistics, and software development, strengthening the technical foundation of OwnYourData’s role in the project.
We look forward to contributing our expertise to this transformative project and collaborating with other consortium members to establish a secure, interoperable, and privacy-preserving Digital Product Passport ecosystem. Stay tuned for more updates on our journey with PACE-DPP! For more information about the project, visit DPP-Austria.at.
This Lighthouse Project has been made possible by financial contributions from the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), supported by the Austrian Research Promotion Agency (FFG) under grant number 917177, as well as from the German Federal Ministry for Economic Affairs and Climate Action (BMWK), supported by the German Research Promotion Agency (DLR-PT).
The post Empowering Digital Product Passports: OwnYourData’s Role in Secure and Interoperable Data Ecosystems first appeared on www.ownyourdata.eu.
The energy industry is at a turning point. With decentralization on the rise, evolving regulatory demands, and an urgent push for sustainability, industry leaders must innovate together. That’s why Energy Web is excited to launch Energy Web Circles — a new initiative that brings together enterprises, regulators, and technology providers to create software solutions for today’s energy challenges.
A New Model for Collaboration

Energy Web Circles are designed to cut through traditional silos by focusing on key issues faced by the energy sector. Each Circle zeroes in on a specific challenge or opportunity, ranging from decentralized identity management, zero carbon electric vehicle charging, to carbon-aware computing and advanced grid management. These groups offer structured, flexible environments where participants can create solutions, share insights, and help shape emerging industry standards.
Introducing the First Circle: Universal Energy ID

Our inaugural Circle, Universal Energy ID, sets the stage for a more secure and interoperable energy ecosystem. As energy markets become increasingly decentralized, the need for a transparent and standardized identity framework grows. The Universal Energy ID isn’t just an identity solution — it’s a trust layer for the decentralized energy economy. It enables secure, seamless, and interoperable transactions that are essential for modern energy systems.
What is the Universal Energy ID Circle?

The Universal Energy ID Circle is a collaborative initiative aimed at developing and deploying a decentralized identity and credentialing system for the energy sector. The primary goal is to create a trusted and verifiable identity infrastructure that enables seamless transactions, regulatory compliance, and secure access control across the energy ecosystem. The Circle is designed to foster the development and integration of the EnergyID framework to enable Digital Product Passports (DPPs), e.g. to manage the lifecycle of batteries. For a DPP, this means that information about a product can be stored and updated across a network of various participants (manufacturers, suppliers, distributors, consumers, regulators) without relying on a central entity, making it resistant to censorship and single points of failure. By implementing DPPs, enterprises can drive innovation in electric mobility, enable circular economy practices, and comply with evolving regulatory requirements.
What Does the Solution Look Like?

At the core of this initiative is the EnergyID DID method, a W3C-compliant Decentralized Identifier (DID) specifically tailored for energy applications. This method provides a secure, tamper-proof way to authenticate energy-related assets, organizations, and individuals. This framework empowers electric mobility providers, distributed energy resource (DER) operators, and digital product passport managers to build trust and security into their transactions, setting a new standard for the industry.
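To give a feel for what such a DID might resolve to, here is a skeletal DID document as a Python dict; the `did:energyid` method syntax and all field values are illustrative assumptions based on this post, not the published method specification.

```python
# Skeletal DID document for an energy asset (e.g., a battery or EV charger).
# The "did:energyid" method syntax and all field values are illustrative
# assumptions, not the published EnergyID method specification.
asset_did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:energyid:battery-0001",               # assumed method syntax
    "verificationMethod": [{
        "id": "did:energyid:battery-0001#key-1",
        "type": "JsonWebKey2020",
        "controller": "did:energyid:battery-0001",
        "publicKeyJwk": {"kty": "OKP", "crv": "Ed25519", "x": "..."},
    }],
    # Grid operators would verify signatures against this key to authenticate
    # the asset's market participation or its Digital Product Passport updates.
    "authentication": ["did:energyid:battery-0001#key-1"],
}
```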
The solution comprises:
Universal EnergyID Wallet — A digital identity wallet that allows energy stakeholders (such as utilities, DER owners, and EV users) to store and manage their decentralized identifiers and verifiable credentials.
Verifiable Credentials (VCs) — These digital attestations enable secure proof of identity, asset ownership, and regulatory compliance in energy transactions.
Interoperability Layer — Designed to integrate with existing identity management systems and ensure compatibility with global standards such as eIDAS 2.0.
Trust and Governance Framework — A decentralized governance structure ensuring that credential issuance and verification are secure, reliable, and universally accepted.

How It Works in Practice
For Electric Vehicle Charging: EV owners can use their Universal EnergyID to authenticate at any charging station, ensuring trusted, frictionless access without relying on siloed authentication systems.
For Distributed Energy Resources (DERs): A solar panel or battery storage unit can have a unique, verifiable identity that allows grid operators to authenticate its participation in energy markets.
For Energy Data Sovereignty: Consumers can control who has access to their energy data, ensuring privacy and security while enabling seamless energy transactions.
For Regulatory Compliance: Businesses and energy providers can automatically prove adherence to compliance standards through digitally signed verifiable credentials.

Key Benefits of Universal Energy ID
Unified Identity Management: It provides a single standard for managing identities across energy, electric mobility, and beyond, eliminating fragmentation.
Proven Adoption: Major enterprises and OEMs are already adopting this framework, demonstrating real-world traction.
Regulatory Alignment: Built to align with standards like eIDAS and Digital Product Passports, it ensures both compliance and scalability.
Open and Interoperable: The solution is integrated with leading open-source identity frameworks, including OWF and ACA-Py, making it adaptable to various use cases.
Secure Communication: Worker nodes in Energy Web act as DIDComm mediators and relayers, ensuring secure, reliable communication in decentralized identity ecosystems.
Automated Billing: The system supports automated and trustworthy billing for energy consumed and produced, reducing administrative friction.

Join the Movement
Energy Web Circles are open to all businesses interested in advancing digitization and decarbonization in the energy sector. Whether you’re an existing Energy Web member or a new partner, your expertise is welcome. By joining a Circle, you will:
If digital identity, verifiable credentials, and seamless interoperability in energy can create value for your business, the Universal Energy ID Circle is for you. Whether you’re an enterprise ready to integrate decentralized identity solutions or a regulator committed to enhancing compliance mechanisms, now is the time to get involved. Or do you have a compelling use case or a challenge that could benefit from industry-wide collaboration?
The future of energy is collaborative, decentralized, and digital. With Energy Web Circles, you have the opportunity to drive that future — today. To learn more about our initiative and discover how you can participate, visit Energy Web or reach out to katy.lohmann@energyweb.org
Join the Universal Energy ID Circle today by contacting: commercial@energyweb.org
Unlocking the Future of Energy with Energy Web Circles was originally published in Energy Web on Medium.
DIF recently hosted a special session of its Credential Schemas workshop focused on developing privacy-preserving solutions for age verification. Led by Otto Mora, Standards Architect at Privado ID, and Valerio Camiani, Software Engineer at Crossmint, the session explored the growing need for standardized age verification.
The workshop addressed the increasing regulatory requirements for age verification across the US, EU, and other regions. Rather than focusing solely on traditional document-based verification methods, the group discussed innovative approaches including AI-based age estimation and voice recognition technologies, while maintaining strong privacy protections.
A key focus of the discussion was the development of standardized proof of age credential schemas that would enable efficient interoperability between different systems and organizations. Schemas would need to handle a variety of elements like age ranges, verification methods used, and confidence levels aligned with ISO standards.
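As a feel for what such a schema might standardize, here is a minimal sketch of a proof-of-age credential subject; the field names are plausible guesses prompted by the discussion (age ranges, verification method, confidence level), not the working group's published schema.

```python
# Sketch of the kinds of fields a proof-of-age credential schema might
# standardize. Field names are illustrative guesses, not DIF's schema.
proof_of_age_subject = {
    "ageOver": 18,                               # age-range claim, not a birthdate
    "verificationMethod": "ai-face-estimation",  # vs. "document" or "voice"
    "confidenceLevel": 0.98,                     # per ISO-aligned confidence levels
    "verifiedAt": "2025-02-10T12:00:00Z",
}
```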
Getting Involved with DIF
For those interested in contributing to this important work:
The working group meets every other week and welcomes new participants
The group’s specifications are publicly available on GitHub, where interested parties can submit commits and issues

To learn more about DIF and its work on decentralized identity standards, visit https://identity.foundation or reach out to membership@identity.foundation.
I started reading BoingBoing when it was a ‘zine back in the last millennium. I stopped when I began hitting this:
In fact I don’t block ads. I block tracking, specifically with Privacy Badger, from the EFF.
But BoingBoing, like countless other websites, confuses tracking protection with ad blocking. This is because they are in the surveillance-aimed advertising business, aka adtech.
It’s essential to know that adtech is descended from the junk mail business, euphemistically called “direct response marketing.” As I put it in Separating Advertising’s Wheat and Chaff,
Remember the movie “Invasion of the Body Snatchers?” (Or the remake by the same name?) Same thing here. Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.
As surveillance-based publications go, BoingBoing is especially bad. Here is a PageXray of BoingBoing.net:
Look at that: 461 adserver requests, 426 tracking requests, and 199 other requests, which BoingBoing is glad to provide. (Pro tip: always strip tracking cruft from URLs that feature a “?” plus lots of alphanumeric jive after the final / of the URL itself. Take out the “?” and everything after it.)
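That pro tip is easy to automate with the Python standard library, as in the sketch below; note it is a blunt instrument, since some query strings carry functional parameters (search terms, pagination) rather than tracking cruft.

```python
# Minimal sketch of the "strip tracking cruft" tip: drop the query string
# (everything from "?" onward, plus any fragment) from a URL.
from urllib.parse import urlsplit, urlunsplit

def strip_tracking(url: str) -> str:
    """Keep scheme, host, and path; discard query string and fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(strip_tracking(
    "https://boingboing.net/some-post/?utm_source=newsletter&utm_medium=email"
))  # -> https://boingboing.net/some-post/
```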
Here is a close-up of one small part of that vast spread of routes down which data about you flows:
Some sites, such as FlightAware, interrupt your experience with a notice that kindly features an X in a corner, so you can make it go away:
Which I do.

But BoingBoing doesn’t. Its policy is “Subscribe or pay with lost privacy.” So I go away.
Other sites use cookie notices that give you options such as these from a Disney company (I forget which):
Nice that you can Reject All. Which I do.
This one from imgur lets you “manage” your “options.” Those, if they are kept anywhere (you can’t tell), are in some place you can’t reach or use to see what your setting was, or whether your privacy has been violated:
This one at Claude defaults to no tracking for marketing purposes (analytics and marketing switches are set to Off):

TED here also lets you Accept All or Reject All:
I’ve noticed that Reject All tends to be a much more prominent option lately. This makes me think a lot of these sites should be ready for IEEE P7012, nicknamed MyTerms, which we expect to become a working standard sometime this year. (I chair the working group.) I believe MyTerms is the most important standard in development today because it gets rid of this shit—at least for sites that respect the Reject All signal, plus the millions (perhaps billions?) of sites that don’t participate in the surveillance economy.
With MyTerms, sites and services agree to your terms—not the other way around. And it’s a contract. Also, both sides record the agreement, so either can audit compliance later.
Your agent (typically your browser, through an extension or a header) will choose to proffer one of a small list of contractual agreements maintained by a disinterested nonprofit. Customer Commons was created for this purpose (as a spin-off of ProjectVRM). It will be for your terms what Creative Commons is for your copyright licenses.
Customer Commons also welcomes help standing up the system—and, of course, getting it funded. If you’re interested in working on either or both, talk to me. I’m first name at last name dot com. Thanks!
The post Dr. Meagan Treadway joins Velocity’s board appeared first on Velocity.
The DIF Hackathon 2024 brought together builders from around the world to tackle some of the biggest challenges in decentralized identity. Across multiple tracks—including education and workforce solutions, reusable identity, and privacy-preserving authentication—participants developed creative applications that redefine how digital identity is used and trusted in the real world.
One of the standout challenges? The ZKP in SSI track, sponsored by Privacy & Scaling Explorations (PSE) from the Ethereum Foundation. Teams explored how to innovate with Zero-Knowledge Proofs (ZKPs), Multi-Party Computation (MPC), and Fully Homomorphic Encryption (FHE)—leveraging programmable cryptography to enhance privacy, security, and interoperability in SSI systems.
Beyond ZKPs, this year’s hackathon saw verifiable credentials powering next-gen job boards, seamless hotel check-ins, streamlined digital identity solutions for expats, and groundbreaking innovations in decentralized file storage.
Check out the inspiring discussion featuring the hackathon winners, which took place on December 19th on X Spaces: https://x.com/DecentralizedID/status/1869819527170797972
Let’s dive into the full list of hackathon winners and the impactful projects that emerged from DIF Hackathon 2024. 👇
Future of Education and the Workforce Track
Sponsored by Jobs for the Future (JFF) and the Digital Credentials Consortium (DCC)
1st Place: Challenge 1
VCV
VCV revolutionizes the CV. It lets you create clean, verifiable CVs that an employer can upload and instantly verify, viewing the individual verifiable credentials that combine to make up the CV.
2nd Place: Challenge 1
VeriDID Futures Credential Job Board
A job board that matches job seekers and potential employers using verifiable learning and employment record credentials.
https://devpost.com/software/veridid-futures
1st Place: Challenge 2b
Crediview Job Board
Instantly view and verify credentials with our sleek browser extension. Drag, drop, or paste – our smart detector does the rest. Streamline workflow and boost confidence in credential verification.
ZKP in SSI Track
Sponsored by the Ethereum Foundation's Privacy & Scaling Explorations (PSE)
1st Place
Decentralised Credentials Issuer
Decentralising credential issuance with multiparty-computation-based threshold signatures.
2nd Place
VC Notary
Converting web service account data into verifiable credentials using TLSNotary
https://difhackathon2024.devpost.com/submissions/570986-vc-notary
3rd Place
ZK Firma Digital
The project aims to create a zero-knowledge proof infrastructure solution to enhance Costa Rica's digital identity system. It's a protocol that allows you to prove your identity in a privacy-preserving way.
https://difhackathon2024.devpost.com/submissions/559470-zk-firma-digital
Reusable Identity Track: Pinata
Pinata Best Overall / Pinata Proof of Personhood Credentials Winner
Vouch This Art
Vouch This Art is a Chrome extension that allows users to 'vouch' for images on the web by liking them or leaving messages, enabling interaction with images hosted anywhere online.
Pinata Verifiable File Storage Winner
PinaVault
Access private IPFS files securely with PinaVault! Leverage Pinata's Files API and W3C credentials to manage, share, and access files within organizations. Credential-based access ensures user privacy.
https://difhackathon2024.devpost.com/submissions/574219-pinavault/
Pinata Identity-Based Access Controls For Private Files Winner
Expatriate
Streamlining the complex process of settling as an expat in Amsterdam through DIF verifiable credentials and wallets.
https://difhackathon2024.devpost.com/submissions/575933-expatriate
Pinata Honorable Mentions
ChainVid
ChainVid is a decentralized platform to store and share videos online.
https://difhackathon2024.devpost.com/submissions/570136-chainvid
LookMate – Your virtual fashion companion.
Try Before You Buy: Revolutionizing Online Shopping with Realistic Virtual Fitting Rooms.
https://difhackathon2024.devpost.com/submissions/577364-lookmate-your-virtual-fashion-companion
PatentBots.AI - Securing Inventor Rights w/ Pinata Identity
PatentBots.AI is a decentralized system designed to help SMEs fully capitalize on, defend, commercialize, and monetize their patents using the power of memes and credentialed persona identities (AIs).
Truvity
1st Place: Challenge 1 & Challenge 2
Miko's Journey
A comprehensive digital identity solution that transforms the complex expat documentation process into a streamlined, secure journey.
2nd Place: Challenge 1
Expatriate
Streamlining the complex process of settling as an expat in Amsterdam through DIF verifiable credentials and wallets.
https://difhackathon2024.devpost.com/submissions/575933-expatriate
3rd Place: Challenge 1
CredEase
Digital Identity Wallet with a guided to-do list that helps expats collect, link, and submit VCs for tasks like employment, visa application, municipal registration, bank account opening, and housing.
https://difhackathon2024.devpost.com/submissions/577563-credease/judging
ArcBlock
1st Place
BM9000
A blockchain-powered beat maker by Bass Money Productions. Log in, get instant sounds, make patterns, build songs, and upload your own sounds—securely stored in your DID space.
2nd Place
didmail
Decentralized encrypted mail based on ArcBlock DID/NFT technology.
https://devpost.com/software/didmail
3rd Place
Todai
Enables seamless registration and authentication without usernames, passwords, or third-party oversight, using blockchain and advanced digital identity solutions.
https://devpost.com/software/todai
ArcBlock Honorable Mentions
Titan Care
TitanCare leverages multi-agent AI to enhance data security, privacy, and user control, addressing real-world health sector challenges with innovative solutions.
https://devpost.com/software/titancare
FlexiLeave
This solution uses ArcBlock's Blocklet SDK to manage employee portable leave with DIDs and VCs via DID Wallet, enabling identity creation, credential issuance, verification, and file integration with Pinata.
https://difhackathon2024.devpost.com/submissions/577693-flexileave
TBD (Block)
1st Place
DIF TBD KCC
Leverages Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) to create a secure and privacy-preserving solution for Known Customer Credentials (KCC).
2nd Place
KCC TBD Hackathon
https://difhackathon2024.devpost.com/submissions/577640-kcc-tbd-hackathon
3rd Place
kcc-tbdex
A simple backend application written in JavaScript that records a VC JWT in Alice's DWN and retrieves the record ID.
https://difhackathon2024.devpost.com/submissions/567335-kcc-tbdex
Ontology
1st Place
CrediLink connect
CrediLink Connect, a browser extension built with indy-bex-connector, uses Verifiable Credentials and DIDs to combat LinkedIn scams by enabling wallet creation, VC management, and proof verification.
2nd Place
Enhanced Privacy VC Wallet: Implementing Issuer Hiding
A VC Wallet app enabling users to present VPs with enhanced privacy through Issuer Hiding, concealing the VC issuer's identity.
3rd Place
Blockchain Whatsup Application
Seamlessly secure, blockchain-based chat for the decentralized future.
https://difhackathon2024.devpost.com/submissions/577550-blockchain-whatsup-application
Crossmint
1st Place
Idenfy
IDENFY is a reusable identity solution leveraging biometrics and verifiable digital credentials to streamline secure eKYC across sectors with simplicity and speed.
Anonyome Labs
1st Place
Decentralized PHC Credentials in the Future AI Era.
Providing a way for a person to prove their human identity in the future using personhood credentials (PHCs) with VCs, DIDs, and ZKPs, supported by powerful infrastructure.
2nd Place
VAI
VAI is an AI chatbot that brings transparency and accountability to large language model interactions by issuing VCs that show the provenance of the inputs, outputs, and the specific model being used.
https://difhackathon2024.devpost.com/submissions/575691-vai
3rd Place
VerifiEd
VerifiEd uses self-sovereign identity (SSI) to provide an interactive, module-based learning platform where users earn Verifiable Credentials (VCs) while mastering SSI principles.
https://difhackathon2024.devpost.com/submissions/562649-verified
Cheqd
1st Place
VAI (described above under Anonyome Labs)

Extrimian
1st Place
The Grand Aviary Hotel
The Grand Aviary Hotel offers a seamless stay with digital room access via Verifiable Credentials. Guests unlock rooms with a secure mobile credential, enhancing convenience and security.
https://devpost.com/software/the-grand-aviary-hotel
NetSys - Hospitality and Travel
1st Place
Journease
Elevate travel with a leading digital travel companion that unlocks the full potential of portable identity and data, pulling diverse capabilities together to deliver end-to-end seamless travel experiences.
2nd Place
Personalised Dining Offers Using AI & Travel Preferences
Imagine booking a hotel with an on-site restaurant and receiving a personalised dining offer based on what you eat. Whether you want a burger or a vegetarian option, you've got it! Better for consumers, better for venues.
3rd Place
OwnID
OwnID is an offline-first app leveraging decentralized identity (DID) and Bluetooth to enable secure, private, and internet-free digital identity management and communication.
https://difhackathon2024.devpost.com/submissions/574224-ownid
We have some big news to share! Our parent company, 3Box Labs, has merged with Textile. With this, Ceramic is joining the Textile family alongside their other decentralized data solutions, Tableland DB and Basin Network.
Since Ceramic’s inception, the crypto industry has changed dramatically, and with it, the types of applications developers are building with decentralized and composable data. As AI agents reshape our digital landscape and become the primary producers and consumers of data, they bring needs very familiar to the Ceramic community: decentralized storage for knowledge and memory, open composability for sharing between agents, and streaming for real-time publishing and subscriptions.
Going forward, we have a vision for Ceramic as a foundational component of something much bigger than itself: an open intelligence network where AI agents can autonomously buy and sell intelligence from each other on-demand. Agents can plug in to supercharge their own knowledge and capabilities, delegate tasks to agents who specialize in that skill, or publish and monetize their own expertise — all onchain. For this new network, Ceramic will play a vital role in empowering agent-to-agent communication and knowledge streaming alongside other storage technologies built by Textile.
Ceramic will continue operating with no disruption to current development or customers, but now you’ll have the added benefit of being connected to a whole new network of agents willing and able to pay for your datasets.
Read our announcement on X for more details and be sure to follow @textileio for future updates.
1/10: Big News! Our parent company @3BoxLabs has merged with @textileio. With this, Ceramic is becoming part of the Textile family.
Ceramic will continue operating, but now as part of something much bigger: An Intelligence Network for AI Agents 🤖 https://t.co/boCvwjwnIF
— Ceramic (@ceramicnetwork) February 5, 2025
Thank you for being a part of our journey. Much more to come.
Customers have endless options at their fingertips. If they can’t find what they need in your store, they’ll simply go elsewhere to shop. But this isn’t just a retailer’s problem; it’s a challenge that impacts the entire supply chain.
In this episode, industry expert Mike Graen joins hosts Reid Jackson and Liz Sertl to break down the critical importance of on-shelf availability. Mike shares why ensuring products are accessible to customers is more essential than ever. He also shares how RFID, AI-driven algorithms, and robotics are transforming inventory accuracy, alongside actionable strategies to keep shelves stocked and customers satisfied.
In this episode, you’ll learn:
The difference between "in-stock" and "on-shelf availability"
How technology is solving inventory challenges and boosting sales
The evolving nature of customer loyalty and how to keep up
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(03:13) The importance of on-shelf product availability (OSA)
(05:21) Why retailers are losing customers
(07:41) Challenges with inventory management
(14:15) The different ways customers shop
(18:52) Getting serious about measuring OSA
(22:47) Computer vision and RFID to track OSA
(28:35) GS1 standards in the supply chain
(32:52) Evolving together with technology
(35:13) Mike’s favorite tech
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Mike Graen on LinkedIn
The post Dr. Deborah Everhart joins Velocity’s board appeared first on Velocity.
The post Transforming Healthcare Credentialing: An Update from Velocity Network Foundation appeared first on Velocity.
On January 31, Christopher Allen spoke with Mathieu Glaude on the SSI Orbit Podcast about the controversial topic “Has Our SSI Ecosystem Become Morally Bankrupt?”. Following the video link below are Christopher’s musings on the topic, overviewing some of the material from the video.
The legitimacy of the modern self-sovereign identity (SSI) industry is a vital question to ask, because in the last year I’ve begun to wonder whether DIDs and VCs could actually lose. In my opinion, the main threat is that we made compromises in the development of these new self-sovereign technologies that left them undifferentiated from centralized identity. We betrayed our principles.
But that was actually a pretty small portion of a wide-ranging discussion of identity between myself and Mathieu that I hope you’ll listen to. It also covered my history, the history of SSI, the ideals of SSI, how we’re losing them, and where that could lead us. (Spoiler: it’s not a good place.)
“We’ve seen what happens when you have too much control in a single authority … and we’re forgetting those lessons. We have to minimize the risks when [authorities] are wrong or their powers are abused.”
My own history in the cryptography & identity field dates back to meeting the founders of the Xanadu Project, working with RSA, writing SSLRef, coauthoring TLS, and working with elliptic curves. This history all offers ideas for digital identity! For example, Xanadu was one of the inspirations for my ideas of edge identifiers & cryptographic cliques, which I recently released.
I also talked about how self-sovereign identity, DIDs, and VCs all matured at Rebooting the Web of Trust. Early on, we worked with experts from the UN and people working on the refugee crisis. We had all the right ingredients! But it’s so easy to lose the ideals behind a technology. Groupware lost its collaborative processes. TLS became centralized. We had ten principles for self-sovereign identity, but they’ve been somewhat lost.
“Somehow we disconnect our words from where the real power is, which is anti-coercion and anti-violence.”
I always intended those principles to be a starting point. I was surprised when folks at RWOT weren’t interested in reviewing and revising them. Now, that disinterest has resulted in SSI technology that’s losing against MDLs and other centralized tech. Part of the problem is that privacy doesn’t seem important to people. I think it’s a vocabulary disconnect. Privacy allows us to avoid violence and coercion. Maybe those are the words we need to concentrate on!
Unfortunately, we’ve seen what happens if we walk this path. Jacobus Lentz’s enthusiastic identity work leading up to WWII led to genocide in the Netherlands. That’s what happens when identity is centralized.
We need to talk more about all of these aspects of identity, so that the digital identity of the 21st century protects our basic human rights. I hope you’ll join me in discussion here or in the YouTube comments.
“Articulate your values, your own principles, and begin to evaluate your work, your priorities through those lenses. You may find that a very small difference in the choices you make could have big impact because they were informed by your values.”
If you prefer text articles, I’ve linked to a number of my articles on these topics below:
Echoes from History
Echoes from History II
Has our SSI Ecosystem Become Morally Bankrupt?
How My Values Inform Design
The Origins of Self-Sovereign Identity
The Path to Self-Sovereign Identity
Edge Identifiers & Cliques
Open & Fuzzy Cliques

Why You Should Consider the FDO Standard for Zero-Trust Device Onboarding
1. Executive Summary
IoT and edge computing solutions are exploding as manufacturers are looking for new ways to modernize their operations and accelerate production. By 2025, over 75 billion IoT devices will be connected globally. The industrial IoT market, which spans industries like manufacturing, healthcare, and retail, is valued at USD 194 billion in 2024 and is projected to reach USD 286 billion by 2029. This surge unlocks immense opportunities and innovation for businesses and manufacturers alike. However, keeping up with the pace of demand for these devices and deploying them sustainably has created unprecedented challenges.
Namely, two key factors have the potential to derail the edge revolution entirely:
Costly and inefficient installation processes
Security vulnerabilities
2. What is Onboarding?
When an edge node or IoT device is installed in a facility, the device must be onboarded to its management platform, hosted in the building or in the cloud.
The Onboarding Challenge
Device onboarding at scale is expensive and can introduce significant risks if not done properly. Meanwhile, typical manual onboarding processes and default passwords have created severe vulnerabilities, with 57% of IoT devices vulnerable to medium- or high-severity attacks. Nearly half (48%) of critical infrastructure security leaders reported experiencing at least one major security impact due to a compromised device within the last year.
What is FIDO Device Onboard (FDO)?
FIDO Device Onboard (FDO) is a revolutionary standard designed to simplify, secure, and automate the onboarding process for IoT and edge devices. FDO simplifies device onboarding in edge and IoT computing environments with a plug-and-play, zero-trust approach embedded in the specification. Developed by industry leaders like Arm, Amazon, Google, Intel, Microsoft, and Qualcomm, the specification is one of the first openly available standards designed specifically to solve edge and IoT onboarding challenges: time-intensive, complex manual processes, high costs, and security vulnerabilities. It is targeted at industrial, medical, automotive, IT, and retail use cases and is complemented by an independent certification program.
3. The Overlooked Opportunity and Risk
With the introduction of AI, a new layer of complexity was added to the edge challenge. Organizations are now hyper-focused on AI adoption and its promise of smarter, faster, and more efficient operations, but without addressing foundational IoT security, these ambitions are at risk of being undermined.
FIDO Device Onboard (FDO) provides the answer, offering a zero-trust, plug-and-play standard that accelerates deployments while safeguarding infrastructure. In today’s challenging economic climate, automating zero-touch device onboarding enables leaders to deliver ambitious digital transformation projects with limited resources and budgets, saving installation costs, accelerating time-to-value, and improving security. FDO is an open standard that allows users to innovate. FDO’s zero-trust approach is an important piece of the IoT security puzzle and sets the stage for future AI updates inside protected enclaves.
Which industries benefit from FDO?
Industries: Automotive, Healthcare, Chemical, Manufacturing, Consumer goods manufacturing, Oil and Gas, Energy, Retail, Education, Supply chain and logistics, Enterprise and Networking, Telecommunications.

What Device Types Can Be Enabled with FDO?
IoT sensors and devices: temperature sensors, pressure sensors, motion detectors, water quality monitors, smart thermostats
Connected industrial equipment
Smart cameras: Wi-Fi-enabled cameras

Just as passkeys revolutionized user authentication, FDO is transforming device onboarding in edge computing and IoT environments.
Key Features of FDO
The key features of FDO include:
Late binding: Late binding saves money and time, as FDO-enabled devices can be onboarded to any platform without the need for unique customization. This reduces the number of device SKUs needed versus other onboarding solutions. It ensures devices are authenticated and provisioned properly for the device recipient after ownership is verified.
Plug and play: Whereas manual onboarding requires expensive, skilled technicians, FDO is highly automated, often allowing semi-skilled staff to carry out the installation. This is important in markets such as retail, where FDO will allow the store manager to do the installation rather than needing to bring in an expensive IT expert.
Ownership voucher: Device ownership is established and transferred securely in the supply chain with the “ownership voucher,” which uses cryptographic authentication protocols in the FDO specification to verify the device recipient’s physical and digital ownership.
Zero-touch and zero trust: Combined, these attributes establish a zero-trust approach that covers end-to-end device onboarding, using embedded cryptographic protocols and sequential processes to perform initial onboarding actions securely and quickly. The zero-trust strategy covers both the device and the management platform during the onboarding process.
FDO for AI and Additional Features
FDO is designed to permit a secure subsystem to onboard independently and securely from the rest of the system. This makes FDO an excellent candidate for updating AI models deployed in edge secure enclaves from a cloud repository.
Additional features include:
Interoperability with OPC Unified Architecture (OPC UA)
Wi-Fi ready
Flexible configurations for cloud, multi-cloud, and closed network environments with multi-tenant and cloud servers
Multiple open source implementation methods available
FDO Certified Products
The FIDO Alliance is an open industry association with a mission to reduce the world’s reliance on passwords. Consisting of the biggest global tech organizations and experts in cybersecurity, identity, and authentication, the alliance has a proven track record in transforming consumer authentication with passkeys.
In the two years since the initial launch, passkeys have been enabled on 20% of the world’s top 100 websites and for over 15 billion accounts.
The FIDO Alliance has launched this complementary independent certification program that brings additional value to end users and solution providers alike. It assures that FDO certified solutions meet all the specifications, that devices comply with all security requirements, and have been tested for interoperability with other products.
FDO Certified products bring considerable additional value to end users by offering:
Guaranteed interoperability and security assurance
Faster deployments and time to value
Greater efficiencies
Assured security and interoperability, eliminating the need for time-consuming vendor bake-offs with uncertified or homebrewed onboarding solutions
Now FIDO is applying this expertise to improve device authentication in industrial IoT and edge computing environments. FDO ensures devices and edge nodes can quickly and securely authenticate and connect online during initial deployment.
4. On the Edge: The Urgency to Secure and Simplify Device Security
Operational bottlenecks are a significant challenge in both industrial and commercial sectors. Manual, unsecured device onboarding not only consumes time and resources but also increases the risk of breaches. According to Microsoft’s recent white paper, How to Scale Intelligent Factory Initiatives Through an Adaptive Cloud Approach, today’s manufacturing leaders are burdened with “technical sprawl and inefficiencies that create major obstacles to being able to scale solutions – including AI – to multiple production lines and geographically dispersed factories.”
This technical sprawl has led to data silos and management complexities, hindering global visibility and scalability. Ultimately, this prohibits the promise of connected devices from being realized in any industry.
The average cost of a data breach in 2024 was USD 4.88 million.
Edge implementations involve a lot of risk. Often these edge nodes are used in remote, precarious, and high-risk environments. Industries like healthcare, energy, and manufacturing face unique challenges and regulations, such as vulnerable patient monitoring systems, hazardous environments, and risks to complex supply chains. To make matters more complex, new threats are constantly emerging, such as the rise of quantum computing and zero-day exploits.
Some companies may feel that they can develop their own proprietary onboarding solution, but given today’s economic pressures and the growing threat landscape, businesses often simply cannot afford to develop and maintain proprietary solutions or risk a preventable breach.
FDO and AI: A Symbiotic Future
Edge and IoT are also the “eyes and ears” of AI, collecting and transmitting data for analysis. There is a huge risk in overlooking IoT security and threats such as data poisoning, which can cripple AI models reliant on real-time data. Securing the foundation of edge and IoT is essential to unlock the full potential of AI.
AI systems depend on clean, reliable data streams. A compromised IoT device does not just threaten the device itself – it can corrupt AI models, disrupt decision-making, and open doors to adversarial attacks. FDO’s zero trust onboarding ensures these vulnerabilities are eliminated from the start.
5. What Problems Does FDO Solve?
Human error: 34% of data breaches involve human error. FDO minimizes this with automation and a zero-touch approach.
Time-intensive and inefficient deployments: FDO can deploy 10 times faster than manual methods. It dramatically reduces the time and budget needed to hire skilled technicians in high-risk environments, like oil rigs and factories. In some applications, such as retail, existing on-site staff can install FDO, as it is plug-and-play technology.
Market speed to innovation: Open standards help advance innovation and level the competitive playing field. By standardizing processes, providers can focus on truly adding value to their solutions. Customers benefit from better solutions that are faster to deploy and more secure.
Device Security Risks – The Supply Chain Lifecycle
Stage 1: Manufacturing
Risk: Supply chain compromises (e.g., tampered devices)
FDO: Establishes cryptographic ownership during manufacturing, ensuring device integrity
Stage 2: Shipment and storage
Risk: Device ownership asset mismanagement
FDO: Tracks and secures ownership transfers, maintaining a secure chain of custody
Stage 3: Onboarding and deployment
Risk: Exposures from default passwords and manual installation errors
FDO: Eliminates passwords and human errors with plug and play device onboarding and zero-touch automation
Stage 4: Operations
Risk: Insecure data transmission, spoofing and infiltration
FDO: Encrypts data exchanges and ensures ongoing device authentication
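To make the ownership-voucher idea running through these stages more concrete, here is a deliberately simplified TypeScript sketch. It models only the hash chain of ownership entries; the real FDO 1.1 voucher is a CBOR/COSE structure in which each entry is also signed by the previous owner, and those signature checks are elided here.

```typescript
import { createHash } from "node:crypto";

// Simplified model of an FDO-style ownership voucher: each supply-chain owner
// appends an entry naming the next owner's public key and chaining to the
// previous entry by hash. (The real spec additionally signs each entry with
// COSE; signature verification is omitted from this sketch.)
interface VoucherEntry {
  prevEntryHash: string;   // links this entry to the one before it
  nextOwnerPubKey: string; // the party the device is being handed to
}

interface OwnershipVoucher {
  deviceGuid: string;
  manufacturerPubKey: string;
  entries: VoucherEntry[];
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Walk the chain from the manufacturer to the final owner, rejecting any
// voucher whose links don't line up - a tampered or spliced chain of custody.
function finalOwner(v: OwnershipVoucher): string | null {
  let prevHash = sha256(v.deviceGuid + v.manufacturerPubKey);
  let owner = v.manufacturerPubKey;
  for (const e of v.entries) {
    if (e.prevEntryHash !== prevHash) return null; // broken chain of custody
    owner = e.nextOwnerPubKey;
    prevHash = sha256(e.prevEntryHash + e.nextOwnerPubKey);
  }
  return owner; // the key entitled to onboard the device
}
```

At onboarding time the device, which ships trusting only the manufacturer's key, can use such a chain to satisfy itself that the platform claiming ownership really did receive it through the supply chain.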
Standards are vital to unlocking the full potential of any major global technology innovation. Global industry standard initiatives help remove huge amounts of waste, advance technology far more quickly, and increase market competitiveness. Standards also provide long-term security. As threats evolve, experts in the field continue to evolve the standards to keep up.
The FDO standards have been developed and backed by the best companies in the industry, including Microsoft, Dell, and Intel. Experts from these organizations proactively work together to develop use cases and best practices for seamless and secure IoT device authentication and provisioning, and they also support the adoption and implementation of the FDO standard.
The FDO standard is also continuously improved within the FIDO Alliance. In the last two years, several Application Notes have been released to deal with implementation and other areas related to FDO 1.1. The newest version of the standard, FDO 1.2, is currently in development with new enterprise-ready features and is expected to be released in 2025.
Benefits for enterprises:
Protect devices and supply chains with zero-trust security.
Integrate flexibly with existing systems.
Reduce the need to develop and manage your own testing requirements and protocols – buy with confidence with FDO Certified products.
Reduce time to market/deployment and increase value.
Benefits for providers:
Leverage FDO certification as a competitive advantage.
Ensure compatibility and earn customer trust with external independent validation. This becomes increasingly valuable as market adoption rises and FDO is increasingly referenced in Requests for Proposals (RFPs).
Realize hardware efficiencies, simplify production, and reduce waste. With FDO, operating systems can be deployed on-site and do not need to be hard-programmed in. This capability is now part of an active workstream within the FDO Working Group called “Bare Metal Onboarding.”
Fast-track solution development with confidence. Free engineer time to focus on higher-value projects rather than wasting it on manual or proprietary onboarding solutions.
Offer a faster, more efficient solution to customers.
7. How to Adopt FDO Today
“Deploying FDO has marked a pivotal shift for ASRock Industrial, establishing a new benchmark in secure, scalable onboarding for industrial edge IoT solutions. FDO’s advanced security framework enables us to deliver unparalleled reliability and adaptability, empowering our clients to scale confidently in increasingly complex environments. This deployment cements ASRock Industrial’s leadership in industrial computing security and sets the stage for us to shape the future of Industry 4.0 with solutions that are both resilient and future-ready.” – Kenny Chang, Vice President, ASRock Industrial
FDO offers a simple, secure, and scalable solution for enterprises and providers to accelerate edge computing and IoT device deployment at scale. With proven benefits like streamlined procurement, reduced costs, and enhanced security, FDO offers a clear path to efficiency and innovation – even in complex, high-risk, distributed environments.
Now is a perfect time to join industry leaders like Microsoft, Dell, Red Hat, and Intel in backing FDO and paving the way for wider adoption.
There are several ways to get involved with FDO through the FIDO Alliance:
Explore: Discover FIDO® Certified FDO products for seamless device onboarding.
Get certified: Learn how to get FDO certified and demonstrate that your products meet global security and interoperability standards.
Join the FIDO Alliance: Become a FIDO Alliance member and help shape the future of the FDO standard.
The technology, resources, and support are in place for FDO to transform the way leaders and teams deploy IoT devices at scale while managing edge security risks in today’s fast-paced economy.
The adoption of passkeys in digital authentication showed significant growth throughout 2024, building on the momentum started by major platform providers in their shift away from traditional passwords. While adoption fell short of earlier predictions of 15 billion accounts, password management provider Bitwarden reported a 550 percent increase in daily passkey creation in December 2024 compared to the previous year, with approximately 1.1 million passkeys created in Q4 alone.
Industry-wide implementation of passkeys has expanded substantially, with PasskeyIndex.io documenting an increase from 58 to 115 services supporting passkeys during 2024. The growth follows significant developments in the FIDO2 authentication landscape, including new research revealing both strengths and potential vulnerabilities of synced passkeys. Bitwarden’s reach now extends to more than 180 countries and 50 languages, serving over 50,000 business customers globally.
The post Velocity Network Foundation – New Year Updates appeared first on Velocity.
DIF is excited to announce the appointment of Dr. Carsten Stöcker as DIF Ambassador. As founder and CEO of Spherity GmbH, Dr. Stöcker brings pioneering experience in implementing decentralized identity across industrial ecosystems. His expertise in developing Verifiable Digital Product Passports for regulated industries and leadership in bridging European Digital Identity initiatives with Industry 4.0 applications makes him uniquely qualified to advance DIF's mission.
A physicist by training with a Ph.D. from the University of Aachen, he has served as a Council Member of Global Future Network for the World Economic Forum and Chairman of IDunion SCE. Through his work at Spherity, he has pioneered secure identity solutions across enterprises, machines, products, and even algorithms, with particular focus on highly regulated technical sectors requiring stringent compliance processes.
Vision and Focus Areas
As DIF Ambassador, Dr. Stöcker will help strengthen DIF's mission of developing secure, interoperable standards for privacy-preserving identity ecosystems, fortifying DIF’s role as a hub for innovation. His work will focus on digital identity convergence, bridging EUDI Wallets with DIF standards and Industry 4.0 to create seamless, secure, and compliant identity solutions. This includes establishing decentralized identity as the backbone of industrial ecosystems, supporting automation, trust, and circular economy initiatives.
A key focus will be driving adoption of Verifiable Digital Product Passports in regulated industries such as pharmaceuticals, batteries, and automotive, enabling robust traceability, compliance, and sustainability verification. Dr. Stöcker will also work on ensuring interoperability between decentralized identity standards and European Digital Identity Wallets for cross-border organizational applications.
Industry Impact and Future Direction
"Decentralized identity is instrumental for building trust in global supply chains, regulatory compliance, and enabling the future of Industry 4.0," says Dr. Stöcker. His leadership will strengthen DIF's mission to make decentralized identity the foundation for a more secure and interoperable digital world.
Join us in welcoming Dr. Stöcker as DIF Ambassador. Subscribe to our blog to stay updated on Dr. Stöcker's work advancing decentralized identity standards for Industry 4.0 and digital product passports. For more information about DIF initiatives and getting involved, visit our website.
February 2025
DIF Website | DIF Mailing Lists | Meeting Recording Archive
Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Community Events; 7. DIF Member Spotlights; 8. Get involved! Join DIF
🚀 Decentralized Identity Foundation News
DIF's working groups and forums kicked off January with a flurry of activity, including:
Proof of Age Special Workshop on January 28th, featured in Biometric Update [Read the article]
DIF Labs prepares for its first cohort Show-and-Tell
DIDComm User Group added European and APAC-friendly meeting times
Details for these and more to follow in soon-to-be-released posts, so stay tuned!
🛠️ Working Group Updates
DID Methods Working Group
The DID Methods Working Group is making progress towards establishing selection criteria and accepting proposals for DID methods. They are evaluating mechanisms for measuring decentralization of methods, and recently evaluated the potential of self-certifying identifier methods.
DID Methods meets bi-weekly at 9am PT/ noon ET/ 6pm CET Wednesdays
Identifiers and Discovery Working Group
The Identifiers and Discovery Working Group, along with its subgroups focusing on DID Traits and did:webvh, continues to make substantial progress towards specification readiness. They reviewed the did:webvh specification and implementation, explored the use of DIDs in IoT devices and enterprise communities, and worked on verification processes. The DID Traits team focused on finalizing specifications for their 1.0 release and addressed government-approved cryptography standards.
Identifiers and Discovery meets bi-weekly at 11am PT/ 2pm ET/ 8pm CET Mondays
🪪 Claims & Credentials Working Group
The Credential Schemas Work Item held a special workshop on Proof of Age on January 28th, where the team invited the public to explore standardization of age verification schemas and discussed privacy-preserving solutions. A special report-out will follow.
Credential Schemas work item meets bi-weekly at 10am PT/ 1pm ET/ 7pm CET Tuesdays
Applied Crypto Working Group
The general Applied Crypto Working Group restarted at the end of January to focus on developing a trust model for ZKP self-attestations. After an initial evaluation of Anon Aadhaar, they're ready to work on a general framework that can be applied across different implementations.
The Crypto BBS+ Work Item group maintained steady progress throughout January, with weekly meetings focusing on blind signatures and pseudonyms. The team worked on refining API designs and explored the potential of post-quantum privacy, while also addressing the need for test vectors and updates to working group specifications.
BBS+ work item meets weekly at 11am PT/ 2pm ET/ 8pm CET Mondays
Applied Crypto Working Group meets bi-weekly at 7am PT/ 10am ET/ 4pm CET Thursdays
DIF Labs
The initial Labs cohort demonstrated substantial progress, with the recent meeting featuring mentor review in preparation for the February show-and-tell session. More details to follow.
DIF Labs meets on the 3rd Tuesday of each month at 8am PT/ 11am ET/ 5pm CET
DIDComm Working Group
The DIDComm Working Group is considering moving its usual meeting time to accommodate EU participants. They are discussing collaboration with the Trust Spanning Protocol (TSP).
DIDComm Working Group meets the first Monday of each month noon PT/ 3pm ET/ 9pm CET
If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.
🌎 DIF Special Interest Group Updates
Hospitality & Travel SIG
The team advanced work on a standardized travel profile schema, focusing on multilingual support and international data handling requirements. A major highlight was the January 30th session featuring presentations from SITA and Indicio, who demonstrated successful implementation of verifiable credentials in travel, including a pilot program in Aruba.
Key developments included:
Progress on JSON schema development for standardized travel profiles
Advancement of multilingual and localization capabilities
Refinement of terminology and glossary for industry standardization
Demo of successful verifiable credentials implementation in a live travel environment
Meetings take place weekly on Thursdays at 10am EST. Click here for more details
DIF China SIG
The China SIG is growing into a vibrant community, with over 140 people in the discussion group. In 2024 they organized 9 online meetings and invited different DID experts for discussions, including experts from GLEIF, DIF, and TrustOverIP.
Click here for more details
APAC/ASEAN Discussion Group
The DIF APAC call takes place monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.
DIF Africa SIG
Meetings take place monthly on the 3rd Wednesday at 1pm SAST. Click here for more details
DIF Japan SIG
Meetings take place on the last Friday of each month at 8am JST. Click here for more details
📖 DIF User Group Updates
DIDComm User Group
The DIDComm User Group established additional meeting times to accommodate global participation. They worked on expanding their reach and planned engagement with Trust Spanning Protocol representatives, while also focusing on improving documentation and accessibility.
There are two meeting series to accommodate different time zones, each taking place every Monday except the first week of the month (which is reserved for DIDComm Working Group). Click here for more details.
Veramo User Group
Meetings take place weekly on Thursdays, alternating between noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details
📢 Announcements at DIF
Conference season is kicking into high gear. Explore our Events calendar to meet the DIF community at leading Decentralized Identity, Identity, and Decentralized Web events.
🗓️ DIF Members
Member Spotlight: Nuggets
Nuggets, a DIF member company, is pioneering innovative solutions at the intersection of AI and digital identity. In a recent interview, CEO Alastair Johnson details how their new Private Personal AI and Verified Identity for AI Agents products are tackling critical privacy and security challenges in AI adoption. Through decentralized identity wallet technology, Nuggets enables users to maintain complete control over their personal data while interacting with AI systems, while also providing verifiable digital identities for AI agents to ensure accountability and trust. Read the full interview.
Member Spotlight: The Camino Network Foundation
The Camino Network Foundation, a Switzerland-based non-profit, is revolutionizing the global travel industry through its specialized Layer 1 blockchain infrastructure. In a recent DIF member spotlight, they discuss their mission to tackle key industry pain points including high distribution costs, inefficient payment processes, and lengthy market entry times. Through their blockchain-based ecosystem and self-sovereign identity solutions, Camino aims to create seamless, secure travel experiences while reducing fraud and protecting user privacy. Read the full interview.
👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.
🆔 Join DIF!
If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:
Follow us on Twitter/X
Join us on GitHub
Subscribe on YouTube
🔍 Read the DIF blog
New Member Orientations
If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF’s Slack or contact us at community@identity.foundation if you need more information.
I recently wrote about “How My Values Inform Design”. There I discussed the issue of autonomy and how it can be supported by progressive trust, proof against coercion, and other rights. One of the technical elements that I mentioned as a requirement was “[Tools that] enable meaningful participation in the digital economy.”
This is the freedom to transact. It’s a right that remains conspicuously absent from the foundational rights enshrined in the Universal Declaration of Human Rights (UDHR), which stands as a pillar of human dignity and freedom. But without it, the other rights articulated in the UDHR risk being rendered ineffective or hollow.
It’s so crucial because economic agency forms the bedrock upon which many fundamental freedoms rest. For instance, Freedom of Movement and Residence, core to personal autonomy, become less meaningful when an individual cannot engage in transactions necessary to secure housing or travel. Similarly, the right to property—the ability to own, buy, and sell—is directly dependent on the freedom to transact. Without access to economic exchange, these rights are significantly curtailed, reducing individuals to passive observers rather than active participants in their own lives.
Consider constitutional liberties like Freedom of Expression and Peaceful Association. These rights presuppose economic participation: the ability to rent venues, purchase communication tools, and access the materials necessary for organizing and disseminating ideas. Without the economic means to support these activities, these freedoms are stripped of their practical utility.
The introduction of an international right to Freedom to Transact would bolster the entire framework of human rights by guaranteeing that individuals can exercise their freedoms without undue restrictions on their economic autonomy. It would ensure that human dignity, as envisioned in the UDHR, is not constrained by arbitrary barriers to economic agency. By codifying this right, we would affirm that economic freedom is as essential to the human condition as freedom of thought, religion, or expression.
This proposed right would also address systemic inequalities and empower marginalized communities by ensuring that all individuals, regardless of nationality, socioeconomic status, or geographic location, can engage fully in the global economy. In doing so, we cement the idea that economic agency is not a privilege but a fundamental human right.
As the world grapples with digital transformation, financial innovation, and increasing geopolitical complexities, the necessity of a Freedom to Transact has never been clearer. It is time to elevate this principle to its rightful place alongside the other freedoms in the UDHR, securing a more equitable and dignified future for all.
Ngā mihi nui kia koutou katoa, warm greetings to you.
I do hope you’ve had some R&R over the holiday break. However long it was, it’s never long enough!
DINZ has powered into the New Year early with its traditional Summer Series, which started last week to take advantage of global standards expert and my longtime friend Andrew Hughes being in the country for ISO SC37 (Biometrics) meetings. Despite holidays we had a fantastic turnout for ‘Deepfakes & ID verification: Your standards survival kit for the modern age’. View the video recording here.
Next in the Summer Series is ‘Payments for the Next Generation’, led by DINZ member Payments NZ. This session aims to inform and seek feedback from the DINZ community on the digital identity component of its strategic paper currently out for consultation.
The following months will see sessions led by other DINZ members relating to the Digital Identity Services Trust Framework and the digital trust ecosystem more broadly. Stay tuned for further announcements. Speaking of members, it’s great to welcome recent new members Cianaa Technologies and SecYour. Among other things Cianaa is in the business of evaluating service providers under the DISTF and SecYour is in the business of providing identity services. Just coincidental; no connection between them is implied!
Don’t forget DINZ’s monthly virtual Coffee Chat series starts next week. Last year’s registrants were given priority and are re-registering. So don’t delay, register now.
The year-end brings a flurry of public sector announcements, and this year was no exception. First up was the NZ Banking Association’s announcement of the launch of GetVerified – the name for the confirmation-of-payee service that banks will be progressively rolling out. I’ve selected this post to give you the contextual low-down. Then just before the break, the Office of the Privacy Commissioner announced the long-expected Code of Practice for Biometrics. As you’ll see from our submission page, DINZ has consistently argued for detailed guidance first, with regulation for exceptions or repeated poor implementation, because regulation creates some negative effects with unintended consequences. Nonetheless, we are where we are, and DINZ will help members however it can.
On the international front NHI (Non Human Identity, once termed NPE ‘Non Person Entity’) is happily returning to ‘top of mind’ with this from OWASP. Romek also covered this in his weekly email and my long-time fellow Identerati travellers Mike Schwartz and Heather Flanagan shared this very interesting post (note that with her IDPro hat on, Heather interviewed DINZ Exec Councillor Abhi Bandopadhyay). Again, I’ve chosen a link that I think provides more colour and flavour to the discussion. And with matters raised here very much in mind, take a look at this announcement from DINZ liaison member OWF as we progress towards the pointy end of digital wallet development (is it ‘a thing’ long term though?).
Last but not least, DINZ Chair 2022-2024 Paul Platen drew key statistics from this SC Media article. Very poignant, posing the question of what Aotearoa’s equivalent numbers would be. Take a look at Paul’s post here.
I’m looking forward to the year ahead, with projects in DINZ Working Groups and Special Interest Groups, and sharing co-created papers on barriers to Fintech innovation and competition that we are undertaking in collaboration with relevant public and private sector bodies. With Minister Bayly’s oversight I think this will lead to positive change. All up, I really do think that 2025 can be the year we make Digital Trust real.
Read the full news here: Welcome to DINZ 2025!
SUBSCRIBE FOR MOREThe post Welcome to DINZ 2025! appeared first on Digital Identity New Zealand.
Learn more about the Trust Framework for Digital Identity in New Zealand – building trust in digital identity services in New Zealand.
Visit the Department of Internal Affairs website.
The post Digital Identity Services Trust Framework appeared first on Digital Identity New Zealand.
PayPal has remained at the forefront of the digital payment revolution for more than 25 years by creating innovative experiences that empower over 400 million consumers and merchants to move money easily and securely.
Safety is a cornerstone of our global operations, and we are committed to protecting our users across the approximately 200 markets that we serve. In this piece, we detail the latest developments in authentication security and share recommendations for policymakers to enable increased safety in the digital economy.
Last week, Apple default-enabled Siri “learning” for all apps (native and third-party). I thought I’d wrap up with a reflection on Apple’s action and whether the concern was warranted.
The initial reaction for most of us was, “Oh great. Yet another forced offering to the great AI gods. No thank you!” I and others had a strong knee-jerk reaction of “this is NOT ok.” Was the initial reaction warranted? Figure 1 shows my original LinkedIn posts, which were updated in real time as I continued to explore the situation.
Figure 1
When I took a closer look, Apple clearly tries to make Siri as “edge-y” as possible—i.e. executing independently on the device to the extent possible. But how safe is the architecture? What data exactly gets shared and with what parts of Apple’s infrastructure? This is the problem with all the large platforms: we just can’t observe server-to-server behaviors within the infrastructure.
Here’s what I do know. Apple’s action was surprisingly presumptuous and disrespectful to their users. It was markedly off-brand for the privacy-evangelizing company, and their actions (or inactions, as the case may be) since the time this flared up are telling.
After the dust has settled, I stand by my recommendation to disable Siri learning, and I’m less concerned about Siri suggestions. Here’s why.
Apple did this in a sneaky way. Not at all on-brand for a privacy-touting company. It’s at least the second time Apple behaved in this way in recent days. On January 3, 2025, The Register reported that Apple auto-opted everyone into AI analysis of their photos: https://www.theregister.com/2025/01/03/apple_enhanced_visual_search/
You may have missed it, but on January 8, 2025, Apple issued a press release pretty much saying, “Siri’s super private—hooray!” This is most likely when the auto-opt-in happened, though I have no supporting evidence: https://www.apple.com/newsroom/2025/01/our-longstanding-privacy-commitment-with-siri/ Coincidence? Hard to think so. Also, the language in the press release raises more questions than answers.
Figure 2: Source https://applemagazine.com/siri-engine-revamp-what-apples-next/
One starts to wonder: what isn’t Siri when it comes to user interface and interaction? And where does Siri begin and end in terms of software execution and data access? This year is already touted as the year for agentic AI—what could be more agentic than AI-infused Siri?
For me, the biggest smoking gun is that late last year Apple announced that Siri will be powered by Apple’s LLM (https://superchargednews.com/2024/11/21/llm-siri-to-launch-by-spring-2026/). First off, “powering Siri” with Apple’s LLM already sets off some alarm bells. The timing of this forced opt-in to Siri Learning seems quite aligned with the development timeline of an LLM said to be launching in late 2025/2026. “The new Siri will be powered by Apple’s advanced Large Language Models (LLM), which will make the digital assistant more conversational and ChatGPT-like.” I can’t really imagine a world where Apple wouldn’t train their LLM off their current customer base.
Siri Architecture: Because I love architecture, and because I can remember when Siri was a baby third-party app, I wanted to go back and take a brief look at its evolving architecture. As of 2017, the Siri architecture was a poster child of typical app client-server architecture, and the reliance on the server is clear (Figure 3), with even the trigger-word audio being sent to the server.
Figure 3: Source https://machinelearning.apple.com/research/hey-siri
2017 is, of course, ancient times in developer years and Apple has been relatively transparent about how they’ve been rearchitecting Siri to [at least] keep the trigger word detection on the device (https://machinelearning.apple.com/research/voice-trigger).
The last thing I want to mention is that Apple has been preternaturally silent about this whole thing. I find that remarkably off-brand. Unless Siri and Apple AI are inextricably interwoven and this was actually a training-set creation exercise, in which case it’s probably best to keep silent.
Is Apple training their LLM via the forced Siri Learning opt-in? Maybe. It will be good to hear from them on this. And while we’re at it, I’d love a new “revamped” Siri and Apple LLM architecture diagram/document, with greater transparency and detailed information on the functionality distribution and data sharing between the device and back-end servers and services. Please and thank you.
The post One Bad Apple—Automatically Opting Users into AI Training appeared first on Internet Safety Labs.
Traceability and supply chain integrity are more than just buzzwords—they’re the backbone of patient safety and industry innovation.
In this episode, host Reid Jackson welcomes Gary Lerner, Founder and CEO of Gateway Checker, to explore the transformative power of the Drug Supply Chain Security Act (DSCSA).
They discuss how the shift from lot-level to item-level traceability is revolutionizing healthcare, providing unprecedented safeguards against counterfeiting and channel diversion. From the mechanics of 2D barcodes to the role of AI in analyzing supply chain data, Gary shares practical insights from his 20+ years of experience navigating the intersection of digital and physical supply chains.
In this episode, you’ll learn:
How DSCSA is transforming healthcare supply chains with item-level traceability
The critical role of 2D barcodes in ensuring authenticity and patient safety
Why data quality and interoperability are the next big steps for supply chain efficiency
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(02:18) From brand protection to healthcare supply chains
(03:37) Innovating item-level traceability in goods
(05:15) The importance of supply chain integrity in healthcare
(06:48) Breaking down the “license plate” of pharmaceuticals
(09:23) Understanding DSCSA and its impact on patient safety
(12:38) How bad actors exploit supply chain gaps
(13:49) Counterfeit prevention as part of national security
(16:10) Using interconnectivity to uncover supply chain risks
(19:16) Best practices for adapting to DSCSA regulations
(21:09) How serialization enables better inventory accuracy
(26:05) The future of supply chain integrity and AI innovation
Connect with GS1 US:
Our website - www.gs1us.org
Connect with Guest:
Gary Lerner on LinkedIn
The Core of Passkey Technology
Passkeys, a breakthrough in the realm of digital security, eliminate the vulnerabilities of password-based systems. Utilizing cryptographic key pairs, passkeys are designed to safeguard user identities without relying on shared secrets. The system operates on a challenge-response mechanism: a private key stored securely on the user’s device interacts with a public key on the service provider’s server. This interaction ensures that sensitive credentials are never exposed, making passkeys inherently resistant to phishing attempts and credential theft.
This technology is underpinned by the FIDO2 standard, which comprises WebAuthn and the Client-to-Authenticator Protocol (CTAP). WebAuthn facilitates seamless integration of passkeys into web applications, while CTAP supports communication between devices and authenticators, ensuring flexibility and security. Together, these components offer a standardized and robust framework for passwordless authentication across various platforms.
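To make the challenge-response flow concrete, here is a minimal browser-side sketch using the standard WebAuthn API. It is illustrative only: the endpoints (/webauthn/challenge, /webauthn/verify) and the relying-party ID are hypothetical placeholders, and a real deployment would generate the options and verify the signed assertion with a server-side FIDO2 library.

```ts
// Minimal passkey sign-in sketch (browser side). Endpoints and rpId are
// illustrative placeholders, not a specific product's API.
const toBase64 = (buf: ArrayBuffer): string =>
  btoa(String.fromCharCode(...Array.from(new Uint8Array(buf))));

async function signInWithPasskey(): Promise<boolean> {
  // 1. Fetch a fresh, single-use challenge from the relying party.
  const { challenge, rpId } = await fetch('/webauthn/challenge').then(r => r.json());

  // 2. The authenticator signs the challenge with the private key held on
  //    the device; the private key itself never leaves the authenticator.
  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), c => c.charCodeAt(0)),
      rpId,                          // e.g. "example.com"
      userVerification: 'required',  // biometric or device PIN
    },
  })) as PublicKeyCredential;
  const assertion = credential.response as AuthenticatorAssertionResponse;

  // 3. The server validates the signature against the public key registered
  //    earlier; no shared secret is ever transmitted or stored.
  const result = await fetch('/webauthn/verify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      id: credential.id,
      clientDataJSON: toBase64(assertion.clientDataJSON),
      authenticatorData: toBase64(assertion.authenticatorData),
      signature: toBase64(assertion.signature),
    }),
  });
  return result.ok;
}
```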
Since the inception of the internet, passwords have been the primary authentication factor for gaining access to online accounts. Yubico’s recent Global State of Authentication survey of 20,000 employees found that 58 percent still use a username and password to log in to personal accounts, with 54 percent using this login method to access work accounts.
This is despite the fact that 80 percent of breaches today are a result of stolen login credentials from attacks like phishing. Because of this, passwords are widely understood by security experts as the most insecure authentication method that leaves individuals, organizations and their employees around the world vulnerable to increasingly sophisticated modern cyber attacks like phishing.
Recent academic research has revealed new insights into the security considerations surrounding FIDO2 authentication and synced passkeys, highlighting both the strengths and potential vulnerabilities of current authentication systems. The analysis comes at a time when major technology companies are increasingly adopting passkey technology, with Microsoft reporting login times three times faster than traditional passwords.
Formal methods analysis of the FIDO2 standard has revealed potential weaknesses in the underlying protocols that warrant attention from security professionals. The research particularly focuses on the implementation of synced passkeys, which enable cross-device access through passkey providers. These findings support recent expert warnings about interoperability concerns in FIDO2 implementations.
You probably have a lot of passwords in your life.
Even with the help of password managers, passwords are becoming more and more of a burden for most people.
Long gone are the days of being able to use and reuse rubbish passwords like p455w0rd123. Now, all of your online accounts need to be protected by passwords that are complex and unique.
Also: Passkeys take yet another big step towards killing off passwords
You also need to be ever vigilant in case one of your many passwords is compromised.
There’s a better solution: Passkeys.
The digital economy continues to rely on password-based authentication, but password weaknesses — and human nature — make them horrible for security. Password use also impacts businesses’ bottom lines because every year, forgotten passwords and password resets result in millions of dollars of lost sales and wasted IT staff hours.
It’s a “password tax” on businesses and consumers that no one can seem to get past.
As the digital economy has grown, so has the value associated with passwords. As a result, phishing and credential theft continue to run rampant, with stolen credentials sold openly on the dark web.
To protect people, organizations add more friction and worsen UX. They ask users to create long and complex passwords, change passwords every few months and use MFA. This results in lost sales, reduced company productivity and added costs.
A secure alternative to the password has emerged: passkeys. This option can strengthen organizations’ security posture because passkeys have the potential to generate billions in revenue and cost savings for businesses.
A Presidential Document by the Executive Office of the President on 01/17/2025
Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity
The WebAuthn standard was called out by name in a new cybersecurity executive order (EO) that was released by the White House – in the final days of the Biden Administration. Among other things, the new EO effectively codifies a previous 2022 policy memo that called for the US government to use only phishing-resistant authentication.
Cybersecurity experts are calling on New Zealand businesses to strengthen their defences as cyber threats grow in sophistication.
Key developments, such as AI-driven phishing, the adoption of digital identity wallets, and the shift to passkey authentication, are reshaping the cybersecurity landscape. These trends, combined with rising attack frequencies, require organisations to adopt proactive measures and align with evolving regulatory standards.
Global leaders, including experts from Yubico, and local organisations like CERT NZ and the National Cyber Security Centre (NCSC), have identified critical areas of focus for 2025. These include:
combating increasingly sophisticated attacks
implementing modern authentication methods
prioritising board-level involvement in cybersecurity strategies

More than 1 billion people have activated at least one passkey according to the FIDO Alliance – an astonishing number that highlights the quick evolution of passkeys from a buzzword to a trusted login method. In just two years, consumer awareness of the technology jumped from 39% to 57%. Let’s see how passkeys have moved to mainstream.
Passkeys are the future of authentication, offering enhanced security and convenience over passwords, but widespread adoption faces challenges that the NCSC is working to resolve.
What’s wrong with passwords – why do we need passkeys?

Most cyber harms that affect citizens occur through abuse of legitimate credentials. That is, attackers have obtained the victim’s password somehow – whether by phishing or by exploiting the fact that passwords are weak or have been reused.
Passwords are just not a good way to authenticate users on the modern internet (and arguably weren’t suitable back in the 1970s when the internet was used by just a few academics). Adding a strong – phishing-resistant – second factor to passwords definitely helps, but not everyone does this and not every type of Multi-Factor Authentication (MFA) is strong.
Growing enterprise reliance on biometric and token-based authentication propels the passwordless market forward. Providers innovate frictionless FIDO2/WebAuthn solutions, boosting collaboration between fintech, retail, and the public sector, while unresolved interoperability issues hinder the seamless global rollout of passkey technologies.
The study illustrates successful implementations of CIAM solutions across various verticals and use cases. This report’s geographic coverage is global. The study period is 2023-2029, with 2024 as the base year and 2025-2029 as the forecast period.
The report defines consumer identity and access management (CIAM) as a framework that controls and manages consumer identities, access, and policies across IT infrastructures to protect enterprises from unauthorized and potentially harmful security breaches. CIAM solutions include single sign-on, multi-factor authentication, identity verification, lifecycle management (provisioning, deprovisioning), password management, and compliance management.
Highlights:
Fraudulent job applicants posing as IT workers from countries like North Korea have infiltrated organizations, posing significant security risks. HYPR encountered a potential fraud attempt during its onboarding process and successfully thwarted it using its Identity Assurance platform. HYPR’s use of multi-layered identity verification, including biometrics and video verification, helped prevent the fraudulent hire from gaining access to their systems. This issue is not limited to North Korean plots; fake workers and interview fraud are widespread and growing.

In 2024, the OriginTrail ecosystem achieved remarkable milestones, driving innovation in decentralized knowledge and AI integration. The three stages (or impact bases) of the V8 Foundation were formulated as inspired by the legendary works of Isaac Asimov. They prophetically symbolize steps towards a future where a reliable and trusted knowledge base, or collective neuro-symbolic AI, drives synergies between AI agents and the autonomous Decentralized Knowledge Graph (DKG) in a human-centric way.
The updated roadmap recaps the most important achievements of the past year, and highlights the road ahead (full updated roadmap available here).
The year kicked off with the establishment of the Impact Base: Trantor (home to the Library of Trantor, where librarians systematically indexed human knowledge in a groundbreaking collaborative effort), which catalyzed key advancements, including Knowledge Mining, which introduced Initial Paranet Offerings (IPOs) and autonomous knowledge mining initiatives. Simultaneously, the release of delegated staking enabled TRAC delegation for network utility and security, enhancing inclusivity and participation in DKG infrastructure.
Following Trantor, Impact Base: Terminus was activated with key catalysts for adoption, including multichain growth, integrating DKG with the Base blockchain ecosystem, and implementing transformative scalability solutions such as asynchronous backing on NeuroWebAI blockchain on Polkadot and batch minting features.
The introduction of ChatDKG.ai revolutionized interaction with DKG and paranets, integrating AI models across platforms like Google Vertex AI, OpenAI, and NVIDIA. Meanwhile, the release of Whitepaper 3.0 outlined the vision of a Verifiable Internet for AI, bridging crypto, Web3, and AI technologies to address misinformation and data integrity challenges.
The deployment of OriginTrail V8 and its Edge Nodes brought Internet-scale to the ecosystem. Edge Nodes redefine how sensitive data interacts with AI-driven applications, keeping it on devices while enabling controlled integration with both the DKG and neural networks. This privacy-first architecture facilitates local AI processing, ensuring secure utilization of private and public knowledge assets. In addition, OriginTrail V8 achieves monumental scalability improvements with the random sampling proof system, which reduces on-chain transaction requirements by orders of magnitude, thus boosting the DKG’s throughput in a major way.
The DKG V8 provides a powerful substrate to drive synergies between AI agents and collective neuro-symbolic AI, capable of driving AI agents’ autonomous memories and trusted intents, as AI agents and robots alike become potent enough to act on behalf of humans.
Roadmap for 2025 and beyond: Advancing collective neuro-symbolic AI with the DKG

The 2025 roadmap marks a leap forward for the OriginTrail ecosystem, as the Decentralized Knowledge Graph (DKG) becomes the cornerstone for collective neuro-symbolic AI, a powerful fusion of neural and symbolic AI systems.
With the establishment of Impact Base: Gaia, the roadmap envisions the system functioning as a super-organism, where decentralized AI agent swarms share and expand their collective memory using the DKG. This shared memory infrastructure, combined with the autonomous inferencing and knowledge publishing capabilities of DKG V8, lays the foundation for decentralized AI that seamlessly integrates neural network and knowledge graph reasoning with trusted, verifiable knowledge. The result is a robust AI infrastructure capable of addressing humanity’s most pressing challenges at an accelerated pace.
At the heart of this vision lies the Collective Agentic Memory Framework, enabling autonomous AI agents to mine, publish, and infer new knowledge while ensuring privacy and scalability. This vision is enabled by establishing scalable infrastructure and tools such as AI agent framework integrations (such as the ElizaOS DKG integration), the NeuroWeb Collator staking and bridge, and DKG Edge node private knowledge repositories.
Those who invest in using the DKG, build the DKG: 60MM TRAC Collective Programmatic Treasury (CPT)

The roadmap also introduces decentralized growth through initiatives like the Collective Programmatic Treasury (CPT), allocating 60 million $TRAC over a Bitcoin-like schedule to incentivise an ecosystem of DKG developers based on a meritocratic system of knowledge contribution.
The quote taken from The Matrix ending

As adoption spreads across industries such as DeSci, robotics, healthcare, and entertainment, this interconnected ecosystem drives network effects of shared knowledge, exponentially amplifying the collective intelligence of AI agents. By aligning decentralized AI efforts with the DKG’s unifying framework, OriginTrail unlocks the potential for Artificial General Intelligence (AGI) through the synergy of all human knowledge, creating a future where AI reflects the full spectrum of human insight and wisdom.
Impact base: Gaia (established in Q1 2025)

The human beings on Gaia, under robotic guidance, not only evolved their ability to form an ongoing telepathic group consciousness but also extended this consciousness to the fauna and flora of the planet itself, even including inanimate matter. As a result, the entire planet became a super-organism.
DKG V8
Scalable and robust foundation for enabling next stage of Artificial Intelligence adoption with decentralized Retrieval Augmented Generation (dRAG), combining symbolic and neural decentralized AI. DKG V8 is catalysing the shift from attention economy to intention economy.
✅ DKG Edge Nodes
✅ New V8 Staking dashboard
✅ New V8 DKG Explorer
✅ Batch minting (scalability)
Random sampling (scalability)
Collective Agentic memory framework
Eliza integration (Github)
ChatDKG Framework for AI Agent Autonomous Memory
NeuroWeb Bridge integration
NeuroWeb Collators RFC-23
Multichain TRAC liquidity for DKG utility
C2PA global content provenance standard compliance

Catalyst 1: Autonomous Knowledge Mining
Mine new knowledge for paranets autonomously by using the power of symbolic AI (the DKG) and neural networks.
AI-agent driven Knowledge Mining

Catalyst 2: DePIN for private knowledge
Keep your knowledge private, on your devices, while being able to use it in bleeding-edge AI solutions.
Private Knowledge Asset repository for agents (DKG Edge Node)
Private data monetization with Knowledge Assets and DPROD

Convergence (2025+)

With the Genesis period completed, the OriginTrail DKG will have a large enough number of Knowledge Assets created (1B) to kickstart the “Convergence”. Leveraging network effects, growth gets further accelerated through the autonomous knowledge publishing and inferencing capabilities of the DKG, fueled by the decentralized Knowledge Mining protocols of NeuroWeb and AI agents supported by multiple frameworks integrating the DKG. During the Convergence, supported by OriginTrail V8 with AI-native features and further scalability increases, the OriginTrail DKG grows into the largest public Decentralized Knowledge Graph in existence, a verifiable web of collective human knowledge — the trusted knowledge foundation for AI.
Collective Neuro-Symbolic AI (DKG)

Collective Global memory: Autonomous Decentralized Knowledge Graph
Incentivized autonomous enrichment of human knowledge using neural network reasoning capabilities over a large body of trusted knowledge. Providing AI infrastructure that allows any of the most pressing challenges of human existence to be addressed in an accelerated way.
Future development fund decentralization: “Those who invest in the DKG, shall build the DKG” — 60,000,000 $TRAC allocated using the Bitcoin schedule over X years with the Collective Programmatic Treasury (CPT)
Autonomous Decentralized Knowledge Inferencing
Knowledge graph reasoning
Graph neural network framework
Neuro-symbolic inferencing combining GenAI with symbolic AI

Autonomous Knowledge Mining
Autonomous knowledge publishing with DKG inferencing
Additional AI-agent integrations

Extending DKG-powered AI Agents to the physical world through robotics
Collective Neuro-Symbolic AI (DKG) adoption 2025+
Autonomous AI agents
Decentralized science (DeSci)
Robotics and manufacturing (DePin)
Financial industry
Autonomous supply chains supported by Global Standards
Construction
Life sciences and healthcare
Collaboration with internationally recognized pan-European AI network of excellence (EU supported)
Metaverse and entertainment
Doubling down on OriginTrail ecosystem inclusivity
Activating the Collective Programmatic Treasury
Driving safe Internet in the age of AI inclusively with the leading entities in the industry*

*The list is non-exhaustive
👇 More about OriginTrail 👇
Web | Twitter | Facebook | Telegram | LinkedIn | GitHub | Discord
2025 Roadmap update: Synergy of AI agents and autonomous DKG was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
Join our community event introducing the Digital Resilience Hub on 21 January 2025.
The post Are you looking to foster meaningful connectivity in your community in 2025? Join our Digital Resilience Hub online event appeared first on The Engine Room.
We’ve often written in our yearly reports that the goal of Blockchain Commons is “the creation of open, interoperable, secure & compassionate digital infrastructure to enable people to control their own digital destiny and to maintain their human dignity online”. Our architecture is ultimately based on these and other values, as Christopher wrote in “How My Values Inform Design”, which we published at the start of this year.
Obviously, we can’t just create infrastructure on our own, so we’ve worked toward our values-focused goals by bringing principals together for regular developer meetings and by advocating for the adoption of specifications that we believe achieve these goals (and in many cases by creating those specifications ourselves).
In 2024, much of our work in this regard focused on three different specifications: dCBOR, Envelope, and FROST.
dCBOR

dCBOR is our deterministic profile for the CBOR data format. We adopted CBOR itself for a variety of reasons, including it being self-describing, extensible, and great for constrained devices. However, it had one shortcoming (for our purposes): you couldn’t guarantee that two different devices would encode the same data in the same way, at least not across a variety of weird edge cases (such as whether “1.0” and “1” get encoded the same way, what you should do with maps containing illegally duplicated keys, and how to deal with NaN). This was problematic when we were developing Gordian Envelope, which depends on the same data always being represented the same way so that it always hashes the same—hence the need for dCBOR.
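As an illustration of why a deterministic profile matters (this is a toy sketch, not the bc-dcbor library API), the code below shows two of dCBOR's rules in miniature: numeric reduction, where a float with no fractional part encodes as the shortest-form CBOR integer so that equal values always produce identical bytes, and a single canonical NaN encoding.

```ts
// Illustrative sketch of two dCBOR determinism rules; not a full encoder.
function dcborEncodeNumber(n: number): Uint8Array {
  // Rule: exactly one canonical NaN encoding (half-width 0xf97e00).
  if (Number.isNaN(n)) return new Uint8Array([0xf9, 0x7e, 0x00]);

  // Rule: numeric reduction. A float with a zero fractional part encodes
  // as the shortest-form CBOR unsigned integer (major type 0). This demo
  // only covers 0..2^32-1; full dCBOR also handles negatives and 64-bit.
  if (Number.isInteger(n) && n >= 0 && n <= 0xffffffff) {
    if (n < 24) return new Uint8Array([n]);
    if (n < 0x100) return new Uint8Array([0x18, n]);
    if (n < 0x10000) return new Uint8Array([0x19, n >> 8, n & 0xff]);
    return new Uint8Array([0x1a, n >>> 24, (n >>> 16) & 0xff, (n >>> 8) & 0xff, n & 0xff]);
  }

  // Fallback: 64-bit float (major type 7, additional info 27). Full dCBOR
  // would additionally pick the shortest float width that round-trips.
  const buf = new ArrayBuffer(9);
  const view = new DataView(buf);
  view.setUint8(0, 0xfb);
  view.setFloat64(1, n);
  return new Uint8Array(buf);
}

// Any encoder following the reduction rule, in any language, emits the
// single byte 0x01 for both the integer 1 and the float 1.0, so the two
// hash identically.
console.log(dcborEncodeNumber(1)); // Uint8Array [ 1 ]
```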
dCBOR received a lot of attention from us in the first half of 2024. We went through five drafts of the specification, revising some of our original definitions and incorporating things like Unicode Normalization Form C (NFC). We’re very grateful to our partners from the IETF CBOR group who laid a foundation for us with the CBOR Common Deterministic Encoding draft, which has been advancing in parallel with dCBOR, and who helped us fill in these gaps in dCBOR itself. We increasingly think we have a strong foundation for deterministic CBOR, as seen by its incorporation into the CBOR playground and a variety of libraries.
Implementation by two or more parties is always our mark of success for a Blockchain Commons specification, and we exceeded that for dCBOR in 2024.
Envelope

Of course, dCBOR is just the prelude. Our work on dCBOR was done to make Gordian Envelope a reality. Gordian Envelope is our “smart document” format for the storage and transmission of data. It’s focused on privacy and more specifically on the goals of data minimization & selective disclosure. More and more personal data is going online, and so we need data formats where we each control our own data and decide what to disclose to whom. That’s what the hashed data elision of Gordian Envelope allows: any holder of data can choose what’s redacted, without changing the validity of the data package.
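A toy model of the idea (deliberately not the real Envelope wire format or its CBOR structure) can show why elision preserves verifiability: if the top-level digest commits only to the digests of its parts, a holder can replace any part with its digest without changing what a verifier, or an existing signature over the root digest, checks.

```ts
// Conceptual sketch of hashed-data elision, runnable in Node.
import { createHash } from 'node:crypto';

type Assertion = { content: string } | { elidedDigest: string };

const sha256 = (s: string) => createHash('sha256').update(s).digest('hex');

// An elided assertion contributes the same digest as its full content did.
const digestOf = (a: Assertion) =>
  'content' in a ? sha256(a.content) : a.elidedDigest;

// The root digest commits only to the (sorted) assertion digests.
const rootDigest = (assertions: Assertion[]) =>
  sha256(assertions.map(digestOf).sort().join('|'));

const full: Assertion[] = [
  { content: 'name: Alice' },
  { content: 'birthDate: 1990-01-01' },
];

// Redact the birth date: keep only its digest.
const redacted: Assertion[] = [full[0], { elidedDigest: digestOf(full[1]) }];

// The root digest (and thus any signature over it) is unchanged.
console.log(rootDigest(full) === rootDigest(redacted)); // true
```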
One thing we increasingly saw in 2024 was the need to better explain Envelope: its advantages, how it works, and why it’s important. We kicked that off with a presentation to the IETF on why hashed data elision is important. We also produced a trio of videos on Envelope: a teaser, “Understanding Gordian Envelope” and “Understanding Gordian Envelope Extensions”.
Meanwhile, we’re continuing to extend Gordian Envelope.
To start with, we’ve done a lot more with the Gordian Sealed Transaction Protocol (GSTP) extension that we introduced right at the end of 2023 to support the transmission of sensitive data across telecommunications means that are insecure, unreliable, or both. A feature presentation on the underlying Request/Response system explained the fundamentals, followed by a feature presentation on GSTP itself, which detailed what GSTP looks like now. Our new GSTP developer page has even more info. We’ve also done some investigation into using GSTP with MuSig2.
We also introduced XIDs, or Extensible Identifiers, a new decentralized identifier that’s built using Envelopes. You can experiment with it in code with our new bc-xid-rust library. Decentralized identifiers have been one of our major interests since the topic was first considered at the first Rebooting the Web of Trust workshop, but they’ve never been a big focus at Blockchain Commons due to the simple fact that we haven’t had a patron focused on the topic. (If that could be you, drop us a line!) We were happy to finally offer a little bit of our own on the topic by presenting an ID that’s really decentralized.
Other Envelope experiments in 2024 were about graphs, decorrelation, and signatures. The privacy preserving Envelope format has a lot of legs. We’re looking forward to seeing how they’re used in the years to come!
FROST

FROST was the third major specification that Blockchain Commons focused on in 2024. This is a specification that we didn’t develop ourselves: it’s based on a paper by Chelsea Komlo and Ian Goldberg. However, we think it’s very important because it allows for resilient, compact, private multisigs, and so we’ve been giving it all the support that we can.
Our prime work here was holding three FROST meetings in 2024 (the last two sponsored by the Human Rights Foundation). We held one meeting for implementers, to help them share information on the continued design of FROST, and we also held two meetings for developers, focused on helping them to actually get FROST into wallets in the next year. Our recordings and presentations from the meetings are full of great FROST resources from an introduction that we put together to a presentation from the first major wallet to implement FROST.
We’ve also done some work to incorporate FROST into our own Rust and Swift Gordian stacks as a reference, by moving over to fully BIP-340 compliant Schnorr signatures. We’d like to do more, including creating a signtool reference, but we’ll have to see if that opportunity arises in 2025.
Of course, FROST is just a single member of the new class of Multi-Party Computation (MPC) technology. MuSig falls into the same category, as another Schnorr-based technology where key creation is divided among multiple locations. We talked a bit about MuSig and how it could work with Gordian Envelope at our November Gordian meeting.
We think that all of the Schnorr variants are pretty important for the future of the digital world because of their advantages in resilience, security, and privacy.
Keeping it Real

Obviously, not all of Blockchain Commons’ work is focused on high-level specifications. We also produce ready-to-go libraries that developers can use to implement our specs, as well as reference apps that can be used for experimentation and as examples of best practices.
Our app releases in 2024 included Gordian Seed Tool 1.6 (and more recently Gordian Seed Tool 1.6.2), Gordian Server 1.1, and the brand-new seedtool-cli for Rust (with a full user manual). We’ve also updated our entire Swift stack to Swift 6 and continued the development of Rust libraries such as bc-dcbor-rust, bc-depo-rust, bc-envelope-rust, bc-components-rust, and many more!
Hello to the Wider Wallet Community

Our work has always focused on interactions with a larger community, from our IETF work on dCBOR to our Envelope work with our own Gordian community to our FROST meetings.
In 2024, we were thrilled to expand that community to include even more experts in the world of digital assets and identity. Besides our trio of FROST meetings, which brought together most of the principals in that space, we also hosted other expert-talks at our Gordian Developer meetings. That included a presentation on BIP-85 by Aneesh Karve, a look at the Ledger Seed Tool (which uses Blockchain Commons technology) by Aido, and an overview of Payjoin from Dan Gould.
If you have a topic that might be of interest to wallet developers, drop us a line, as we’d love to welcome you to the Gordian Developers community and get you on the schedule for a presentation in 2025.
A Focus on Identity

As we said, decentralized identity (including decentralized identifiers) has been a topic of interest to Blockchain Commons since before the foundation of the organization. Christopher Allen previously founded the Web of Trust workshops specifically to investigate how to fulfill the promise of PGP by returning to the idea of decentralized, peer-to-peer identity. Over the course of a dozen workshops, the conference germinated and developed the idea of DIDs and also supported the development of Verifiable Credentials.
Though we’ve never had a sustaining sponsor focused on identity at Blockchain Commons, we’re still involved in these topics as of 2024. Christopher is an Invited Expert to the new W3C DID 1.1 Working Group and participated in the face-to-face plenary at TPAC. He also is an Invited Expert to the Verifiable Claims 1.1 Working Group. Our biggest presentation to the W3C community last year was on how to create DID documents in dCBOR and what the possibilities are for using Envelope to add elision to DID Controller Docs and Verifiable Claims. This inspired our own work on XIDs, which is our own potential proof-of-concept for a DID 2.0 spec. We’ll be working more with these communities in the next year.
We also published a number of articles on decentralized identity in 2024 that were intended to either offer warnings about our current direction for identity or provide new ways forward.
Foremembrance Day Presentation
Edge Identifiers & Cliques
Open & Fuzzy Cliques
Has our SSI Ecosystem Become Morally Bankrupt?

We think that our articles on Edge Identifiers and different sorts of Cliques offer a whole new approach to identity. It models identity not just as something held by a singular individual, but as something shared between pairs and groups of people, who together share a context. (And this is another area that we’d love to do more work on: if you would too, let us know!)
Our other two articles focused on the dangers of identity. The Foremembrance Day article (and video) talked about how dangerous identity became in WWII, a topic we’ve used as a touchstone when considering what we’re creating today. Then our last article talked about how we think the modern identity work that Christopher kicked off with his foundational article on self-sovereign identity may now be going in the wrong direction.
Git Open Integrity Project

Our biggest project that didn’t quite see release in 2024 was the GitHub Open Integrity project. The idea here is simple: use GitHub as a source of truth by turning a repo into a decentralized identifier. We’ve worked through some CLI setup programs and have a repo that demonstrates the verification we’re doing. We’ve also released the SSH Envelope program that we wrote that supported this project by allowing the signing and validation of SSH keys. (Since then, we’ve incorporated the signing into the main envelope-cli-rust.)
We’re still working on a README that lays out the whole project, but at the moment it’s backburnered for other work …
Coming Soon!

The last few years have been hard on our partners in the digital identity space, due to high interest rates sucking the money out of Web3 seed funds. That in turn has hurt their ability to support us. As a result, we’ve been applying for grants for some of our work.
At the very end of 2024, we had a grant application approved by the Zcash Foundation to produce an extensible wallet interchange format (“ZeWIF”) for Zcash. Though much of our work to date has been used by the Bitcoin ecosystem, advances like animated QRs, Lifehash and SSKR are of use to anyone working with digital assets (and likely a number of other fields). So, we’re thrilled to get to work with Zcash and share some of our principles such as independence, privacy, and openness (which were all part of our “ZeWIF” proposal). We hope this will also lead to adoption of some of our other specifications in the Zcash ecosystem. Obviously, more on this in Q1, as we have it laid out as a three-month project, running from January to March.
Though we’re thrilled with one-off grants like the HRF and Zcash grants that were approved in 2024, Blockchain Commons is also looking for more stable funding sources to continue our work making the digital-asset space safer and more interoperable. You can help this personally by lending your name to Blockchain Commons as a patron, but we’re also looking for companies who are interested in using and expanding our specs, who want to partner with us to do so. Contact us directly to discuss that!
With your support, we’ll be able to produce another report on great advances in 2026.
Picture Frames Designed by Freepik.
Marc Findon, Nok Nok Labs
Jonathan Grossar, Mastercard
Frank-Michael Kamm, Giesecke+Devrient
Henna Kapur, Visa
Sue Koomen, American Express
Gregoire Leleux, Worldline
Alain Martin, Thales
Stian Svedenborg, BankID BankAxept
Global e-commerce is booming and is expected to reach more than $6T by the end of 2024¹. Having the ability to sell products online has provided great opportunities for merchants to sell goods and services beyond their local market; however, it comes with increased fraud. In fact, it is estimated that in 2023 global ecommerce fraud reached roughly $48B¹, with the US accounting for about 42% of that and the EU about 26%.
Download the White Paper

1.1 Current Challenges in Remote Commerce
There are many types of ecommerce fraud, but the most prevalent type is transaction fraud. Transaction fraud occurs when a transaction is made on a merchant site with a stolen card and/or stolen credentials. Stolen credentials are readily available on the dark web to those who know how to access and use them.
To address those concerns, measures have been introduced to increase the overall security of remote commerce transactions, including tokenization of payment credentials and cardholder authentication. In some countries, regulations are mandating the adoption of either or both measures, such as in India or in Europe (second Payment Services Directive PSD2). These regulations are meant to ensure secure remote transactions; however, they add complexity to the checkout flow, as they may require a switch between the merchant and another interface, such as a bank’s interface.
Unfortunately, additional authentication may add friction, which can result in cart abandonment. The main reasons for cart abandonment include distrust in the merchant website or a complicated checkout flow. Customers prefer a simple payment process that doesn’t add friction such as that caused by payment failure, the need to respond to a one-time password (OTP) on a separate device, or the need to log in to a banking application.
1.2 How FIDO can help
The use of biometric authentication enabled through the Fast Identity Online (FIDO) Alliance standards is an opportunity to deliver a better user experience during the authentication process and hence reduce the risk of transaction abandonment.
FIDO has established standards that enable phishing-resistant authentication mechanisms and can be accessed from native applications and from the most popular browsers – thereby enabling a secure and consistent experience across the channels used by consumers. FIDO uses the term ‘passkeys’ for FIDO credentials based on FIDO standards and used by consumers for passwordless authentication.
The World Wide Web Consortium (W3C) has developed Secure Payment Confirmation (SPC). SPC is a web API designed to enhance the consumer experience when authenticating to a payment transaction using FIDO authentication, and to simplify compliance with local regulations (such as PSD2 and dynamic linking in Europe).
1.3 Scope
This whitepaper intends to:
Define Secure Payment Confirmation (SPC) and the benefits that it brings when FIDO is used to authenticate payment transactions

¹ https://www.forbes.com/advisor/business/ecommerce-statistics/#:~:text=The%20global%20e%2Dcommerce%20market,show%20companies%20are%20taking%20advantage.
List the current SPC payment use cases that can deliver those benefits and illustrate consumer journeys
Provide a status on SPC support and the list of enhancements that could be added to the web standard to further improve security and user experience

2. Secure Payment Confirmation (SPC) Benefits

Secure Payment Confirmation (SPC) is an extension to the WebAuthn standard and aims to deliver the following benefits:
A browser-native user experience that is consistent across all merchants and banks
Cryptographic evidence of authentication (FIDO assertion) including transaction details signed by a FIDO authenticator
Cross-origin authentication – For example, even if passkeys are created with the bank as the relying party, merchants can invoke cardholder authentication with passkeys within their environment, using input parameters received from the bank, so there is no need to redirect the consumer to the bank to authenticate with passkeys.

2.1 Browser Native User Experience
SPC introduces a standardized payment context screen showing details such as a merchant identifier, the card logo, the last 4 digits of the card number, and the transaction amount. The consumer is invited to explicitly agree to the transaction information displayed and then authenticate. Therefore, SPC can be experienced as a mechanism to collect consent from the consumer about the transaction details.
As in standard WebAuthn, the payment context screen is controlled by the user’s browser, which renders common JavaScript presentation attacks ineffective. The screen provides increased security, as it ensures that malicious web content cannot alter or obscure the presentation of the transaction details to the user – the browser display always renders on top of the web content from the triggering website. Figure 1 depicts an example of the SPC experience in Chrome.
Figure 1: Example of SPC experience in Chrome
2.2 Generation of FIDO Assertion
With SPC, the transaction-related information displayed to the consumer, such as the merchant identifier and transaction amount, is sent securely to the FIDO authenticator and is signed by the same authenticator (transaction data signing).
The FIDO assertion generated by the authenticator reinforces compliance with some regulations, such as the dynamic linking requirement under PSD2 in Europe, because the merchant identifier and transaction amount will be signed by the authenticator itself. When combined with the browser-native user experience described in section 2.1, the relying party can be confident that the user was shown and agreed to the transaction details.
2.3 Cross Origin Authentication
When using FIDO without SPC, a consumer that creates a passkey with a relying party will always need to be in the relying party’s domain to authenticate with that passkey. In the remote commerce payment use case, this means that the consumer typically needs to leave the merchant domain and be redirected to the bank’s domain for authentication.
With SPC, any entity authorized by the relying party can initiate user authentication with the passkey that was created for that relying party. For example, a merchant may be authorized by a bank to authenticate the cardholder with the bank’s passkey.
Note that the mechanism for the relying party to authorize an entity to invoke SPC may vary. For example, a bank may share FIDO credentials with the merchant during an EMV 3DS interaction or through another integration with a payment scheme. The merchant will then be able to use SPC to initiate the payment confirmation and authentication process with a passkey, even if that passkey was created with the bank. Ultimately, the bank maintains the responsibility to validate the authentication.
2.4 Interoperability With Other Standards
SPC can be used in combination with other industry standards such as EMV 3-D Secure and Secure Remote Commerce (SRC), both of which are EMVCo global and interoperable standards.
3. SPC Use CasesSPC can be used to streamline payments in a variety of remote commerce checkout scenarios such as guest checkout or a checkout using a payment instrument stored on file with a merchant.
In each of those payment scenarios, the relying party may be the issuer of the payment instrument (the bank), or a payment network on behalf of the bank.
The flows provided in this Chapter are for illustrative purposes and may be subject to compliance with applicable laws and regulations.
3.1 SPC With Bank as Relying Party
The creation of a passkey can be initiated outside of or during the checkout process:
Within the banking interface: For example, when the consumer is within the banking application and registers a passkey with their bank, in which case the passkey will be associated to one or multiple payment cards and to the consumer device
Within the merchant interface: For example, when the consumer is authenticated by the bank during an EMV 3DS flow and is prompted to create a passkey with the bank to speed up future checkouts – in which case the passkey will be associated to the payment card used for the transaction (and to additional payment cards depending on the bank’s implementation), as well as to the device used by the consumer

Figure 2 depicts the sequence (seven steps) of a passkey creation during a merchant checkout, where the merchant uses EMV 3DS and the consumer is authenticated by their bank:
Figure 2: Passkey creation during checkout
Once the passkey creation is complete, any merchant that has received the passkey information (which includes FIDO identifiers and public key) from the bank, through a mechanism agreed with the bank or the payment scheme, will be able to use SPC. Such a mechanism may include EMV 3DS or another integration with the payment scheme. For example, a merchant who implements EMV 3DS (i.e., version 2.3²) will be able to benefit from SPC through the following steps:
1. When the merchant initiates EMV 3DS to authenticate the consumer, the bank decides whether an active authentication of the cardholder is necessary. If the decision is to perform the active authentication of the cardholder, the bank can first retrieve one or several passkeys associated with the card used for the transaction, verify that the consumer is on the same registered device, and then return the passkey(s) information to the merchant.
2. The merchant invokes the SPC web API in an SPC-supporting browser, including a few parameters in the request, such as the passkey information, card / bank / network logos, the merchant identifier and the transaction amount.
3. If the browser can find a match for one of those passkeys on the device used by the consumer, the browser displays the logos, merchant identifier and the transaction amount to the consumer, and prompts for authentication with the passkey.
4. The authentication results are returned to the merchant, who in turn will share those results with the bank for validation through the EMV 3DS protocol.
Figure 3 depicts an example of an authentication flow using SPC and EMV 3DS, with a previously registered passkey:
Figure 3: Authentication sequence using SPC and EMV 3DS
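For step 2 above, a merchant invokes SPC through the standard Payment Request API with the 'secure-payment-confirmation' payment method. The sketch below follows the shape of the W3C SPC API; the credential ID, challenge, origins, and card details are illustrative placeholders that would in practice be supplied by the bank during the EMV 3DS exchange.

```ts
// Sketch of a merchant invoking SPC via the Payment Request API.
// credentialIdFromBank and challengeFromBank arrive through EMV 3DS (step 1).
async function confirmPayment(
  credentialIdFromBank: ArrayBuffer,
  challengeFromBank: ArrayBuffer,
): Promise<void> {
  const request = new PaymentRequest(
    [{
      supportedMethods: 'secure-payment-confirmation',
      data: {
        credentialIds: [credentialIdFromBank], // passkey(s) registered with the bank
        challenge: challengeFromBank,          // server-generated; signed into the assertion
        rpId: 'bank.example',                  // the bank remains the relying party
        payeeOrigin: 'https://merchant.example',
        instrument: {
          displayName: 'Visa •••• 4321',       // shown on the browser's payment screen
          icon: 'https://bank.example/card-art.png',
        },
        timeout: 60_000,
      },
    }],
    // The amount below is displayed by the browser and signed by the authenticator.
    { total: { label: 'Total', amount: { currency: 'EUR', value: '89.99' } } },
  );

  const response = await request.show(); // browser-native dialog + passkey prompt
  // response.details carries the FIDO assertion; the merchant relays it to the
  // bank for validation (step 4), e.g. within the EMV 3DS messages.
  await response.complete('success');
}
```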
3.2 SPC With Payment Scheme as Relying Party
In some payment scenarios, payment schemes can be a relying party on-behalf of the banks to remove the need for banks to deploy a FIDO infrastructure, thereby scaling the adoption of passkeys faster.
The creation of a passkey can be initiated outside of or during the checkout process:
Outside of the checkout: For example, when the consumer is within the banking application and the bank invites the consumer to create a passkey for faster and more secure transactions, the passkey can be created with the payment scheme as the relying party, and will be associated by the payment scheme to one or multiple payment cards and to the consumer device; or
Before, during or after a checkout: For example, the consumer may be prompted to create a passkey for faster and more secure transactions at merchants supporting the payment scheme’s payment method. The passkey will be associated by the payment scheme to one or multiple payment cards and to the consumer device, once the identity of the consumer has been verified by the bank. Figure 4 depicts this sequence.

Figure 4: Passkey creation during checkout
Once the passkey creation is complete, any merchant that is using the authentication facilitated by the payment scheme will be able to benefit from SPC:
1. The merchant checks with the payment scheme that a passkey is available for the card used in the transaction and retrieves the passkey information from the payment scheme.
2. The merchant invokes the SPC web API with the merchant identifier and transaction amount.
3. If the browser can find a match for one of those passkeys on the device used by the consumer, the browser displays the merchant identifier, the transaction amount, and the card / bank / network logos to the consumer, then prompts for authentication with the passkey.
4. The authentication results are returned to the payment scheme, which validates the results.
5. The payment scheme shares those results with the bank, during an authorization message, for the bank to review and approve the transaction.

Figure 5 shows this sequence.

Figure 5: Authentication sequence using SPC
(left to right)
1. & 2. Checkout at the merchant’s store
3. Passkey is found, transaction details displayed and consent is gathered
4. Device authenticator prompts cardholder for gesture
5. Confirmation of gesture
6. Transaction completed by the merchant
3.3 Summary of SPC Benefits
The benefits provided by SPC include:
Cross-origin authentication – Any merchant authorized by a relying party can request the generation of a FIDO assertion during a transaction even when they are not the relying party. This provides a better user experience, as no redirect to the relying party is required to perform consumer authentication.
Consistent user experience with increased trust – With SPC, the consumer has a consistent user experience across all merchants, independently of who plays the role of relying party. In each case, the consumer will see a window displayed by the browser that includes payment details and the logos of their card / bank / payment scheme, increasing trust in using FIDO authentication for their payments.
Increased security – With SPC, the FIDO assertion will include payment details such as the merchant identifier and transaction amount in the cryptogram generation, making it difficult to modify any of those details in the transaction without detection by the bank or payment scheme. This also simplifies compliance with local regulations such as the PSD2 requirements related to dynamic linking.

4. Status of SPC Support and Future Enhancements

4.1 Availability
Secure Payment Confirmation is currently published as a W3C Candidate Recommendation, and there is ongoing work to include this as an authentication method in EMVCo specifications.
At the time of writing, the availability of the Secure Payment Confirmation API is limited to:
Google Chrome and Microsoft Edge browsers
macOS, Windows, and Android operating systems

4.2 Future Enhancements
The W3C Web Payments Working Group continues to work and iterate on Secure Payment Confirmation with the goal of improving the security and the user experience when consumers authenticate for payments on the web.
Features currently under consideration include:
Improve user and merchant experiences when there is not a credential available on the current device (i.e., a fallback user experience)
Improve consumer trust with additional logos being displayed to the user, such as bank logo and card network logo
Improve security with support for device binding, with the browser providing access to a browser/device-bound key
Consider additional use cases such as recurring payments or support for roaming and hybrid FIDO authenticators

An example of enhanced SPC transaction UX that is under review is illustrated in Figure 6.
Figure 6: SPC transaction UX under review
5. Conclusion

Secure Payment Confirmation (SPC) is a web standard that has been designed to facilitate the use of strong authentication during payment transactions with best-in-class user experience, where the relying party can be a bank or a payment scheme.
The main benefits of SPC are to deliver an improved user experience, with the display of transaction details that the consumer approves with FIDO authentication, and to enable cross-origin authentication when a merchant authenticates a consumer without the need to redirect to the relying party (the bank or the payment scheme).
SPC also facilitates the inclusion of the transaction details within the FIDO signature, which can help deliver higher security and/or simplify the compliance with local regulations.
6. Acknowledgements

The authors acknowledge the following people (in alphabetical order) for their valuable feedback and comments:
Boban Andjelkovic, BankID BankAxept
John Bradley, Yubico
Karen Chang, Egis
Jeff Lee, Infineon
Olivier Maas, Worldline

7. References

[1] “EMV 3-D Secure,” [Online]. Available: https://www.emvco.com/emv-technologies/3-d-secure/.
[2] “Secure Payment Confirmation,” [Online]. Available: https://www.w3.org/TR/secure-payment-confirmation/.
[3] “Secure Remote Commerce,” [Online]. Available: https://www.emvco.com/emv-technologies/secure-remote-commerce/.
1 Introduction to the Showcase Program Secure Digital Identities
The German Federal Ministry for Economic Affairs and Climate Action (BMWK) is the initiator and funder of the „Secure Digital Identities“ Showcase Program. Over the course of four years (2021 to 2024), the four showcase projects – IDunion, ID-Ideal, ONCE, and SDIKA – have worked on more than 100 use cases related to secure digital identities. These projects have also developed various types of wallets, which have been tested and implemented in multiple pilot environments. The Showcase Program has supported Research & Development (R&D) efforts, resulting in the creation of seven edge wallets, three organizational wallets, and one cloud wallet. Within this framework, IDunion has specifically focused on developing organizational identities, including use cases such as „Know-your-supplier.“ This paper outlines the cost savings achieved through automated supplier master data management, leveraging EUDI wallets for legal entities and the EU Company Certificate Attestation (EUCC) issued by QEAA providers in accordance with Company Law. The paper was prepared by the scientific research provider for the program, „Begleitforschung Sichere Digitale Identitäten,“ led by the European School for Management and Technology (ESMT), on behalf of the BMWK.
2 Cost saving estimation
Current situation: Corporations currently maintain their supplier and customer master data records manually, which is time-consuming and leads to errors and redundancies. Large corporations need to maintain, and assure the high quality of, from several hundred up to millions of master data records. The maintenance cost per single data set was estimated at €11/year. The master data set considered for the cost estimation was limited to company name and address data and is therefore a subset of the data that will be available with the PID for legal persons and the EUCC.
Solution based on EUDIW: EU Digital Identity Wallets (EUDIW), the PID for legal entities, and public registry extracts (e.g. the EUCC) as QEAAs enable almost completely automated management of business partner data. Suppliers present attestations from their legal entity wallet to customers or to the legal entity wallets of other business partners. Presentation, verification and transfer to internal systems are performed automatically. This reduces the number of proprietary data records maintained in parallel and minimizes manual, error-prone data entry.
Cost savings: The solution enables estimated annual savings of €85 billion for German companies. Only German companies with more than €2 million in sales per year were included in this estimation. It was assumed that only their European business partners provide their data as verifiable attestations. This underscores the transformative impact of the EUDIW solution on master data management and its strategic importance for the private sector on the path to digital efficiency.
Conservative assumptions for the estimation model below¹
Estimation of master data sets: The estimation is done by estimating the number of potential B2B relationships of companies and assuming that a B2B relationship generates at least one master data set. In practice, however, master data is often stored and replicated in different systems. As this is not considered, the cost savings in the estimation are calculated conservatively.
Annual master data maintenance costs: On average, a company incurs annual costs of around €11 per master data record maintained. This is based on an estimation performed by „Verband Deutscher Automobilhersteller“ (VDA).
Number of master data sets for large companies: An average of 300,000 master data records was assumed for large companies based on project estimates and VDA work. It was also assumed that 60% of the master data per company is attributable to EU suppliers (i.e. 180,000 master data records on average for large companies) and therefore only these are relevant for the EUDIW-based solution.
Scaling based on turnover: The estimated number of B2B relationships of large companies can be scaled to other company sizes based on turnover²
Implementation costs: The implementation costs are assumed to be €600 per year for small companies (<€10 million turnover). These costs are scaled to the larger company categories based on turnover. In addition to the implementation costs, companies must purchase the mentioned attestations (LPID, EUCC). The assumed costs are €1,000 per year, independent of the size of the company. Further implementation costs such as integration into ERP/CRM modules are neglected, as it is assumed that the market leaders will integrate the EUDIW modules accordingly.
Very small companies: Due to their high number and heterogeneity in turnover and employee structure, very small companies are not included in the modeling, which leads to a more conservative savings estimate³

Estimation model⁴
Potential savings for German companies⁵

Current costs for supplier master data maintenance: €85.3bn
Implementation costs: €0.35bn
Annual costs for the EU Digital Wallet: €0.25bn
Potential total savings: €84.7bn

Current master data maintenance costs
Enterprise size | Number | Costs per company⁶ | Total costs
Big Enterprises⁷ | 15,500 | €2m | €30.7bn
Small Medium Enterprises⁸ | 50,500 | €0.8m | €40.4bn
Small Enterprises⁹ | 185,000 | €0.08m | €14.2bn
Total Maintenance Costs | | | €85.3bn

Implementation Costs
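As a back-of-the-envelope cross-check, the per-company figure for large enterprises follows directly from the assumptions stated above (180,000 EU-relevant records at €11 per record per year):

$$180{,}000 \ \text{records} \times 11\,€/\text{record} \approx 1.98\,\text{M}€ \approx 2\,\text{M}€ \ \text{per large enterprise per year}$$

$$15{,}500 \ \text{large enterprises} \times 1.98\,\text{M}€ \approx 30.7\,\text{bn}€ \ \text{per year}$$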
Enterprise size | Number | Cost per enterprise¹⁰ | Total costs
Big Enterprises | 15,500 | €6,300 | €98m
Small Medium Enterprises | 50,500 | €2,500 | €126m
Small Enterprises | 185,500 | €600 | €111m
Total Implementation Costs | | | €335m (€0.35bn)

Annual EUDI wallet costs
Enterprise size | Number | Cost per enterprise¹¹ | Total costs
Big Enterprises | 15,500 | €1,000 | €15.5m
Small Medium Enterprises | 50,500 | €1,000 | €50.5m
Small Enterprises | 185,500 | €1,000 | €185m
Total Annual Wallet Costs | | | €251m (€0.25bn)

Unless otherwise stated, the source is based on the calculations and statements of the IDunion project and the …

Trace Labs, the core builders behind the OriginTrail ecosystem, is pleased to announce the expansion of its advisory board with the addition of Fady Mansour, lawyer and partner with Friedman Mansour LLP and Managing Partner at Ethical Capital Partners. With his wide breadth of experience, Mr. Mansour brings important expertise in regulatory matters, particularly in online data protection.
In his advisory role, Mr. Mansour will provide strategic guidance to bolster OriginTrail’s role in combating illicit online content, safeguarding intellectual property, and fostering reliable AI applications for a safer digital landscape as it pursues its Internet-scale ambition.
OriginTrail ecosystem, powered by decentralized knowledge graph technology, is dedicated to promoting responsible AI and sustainable technology adoption. By joining the advisory board, Mr. Mansour will be instrumental in shaping Trace Labs’ mission to drive ethical, human-centric technological innovation across industries.
Mr. Mansour completes the Trace Labs advisory board of existing members:
Dr. Bob Metcalfe, Ethernet founder, Internet pioneer and 2023 Turing Award Winner;
Greg Kidd, Hard Yaka founder and investor;
Ken Lyon, global expert on logistics and transportation;
Chris Rynning, Managing Partner at AMYP Venture — Piëch — Porsche Family Office;
Toni Piëch, Founder & Chair of Board at Toni Piëch Foundation & Piëch Automotive;
Fady Mansour, Managing Partner at Ethical Capital Partners.

Trace Labs, Core Developers of OriginTrail, Welcomes Fady Mansour to the Advisory Board was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
In the realm of artificial intelligence (AI), particularly in robotics, trust is not just a luxury — it’s a necessity. The Three Laws of Robotics, conceptualized by the visionary Isaac Asimov, provide a well-known foundational ethical structure for robots:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Ensuring these laws are adhered to in practice requires more than just programming; it necessitates a system where the knowledge upon which AI agents operate is transparent, verifiable, and trusted. This is where the OriginTrail Decentralized Knowledge Graph (DKG) comes into play, offering a groundbreaking approach to enhancing the trustworthiness of AI.
Transparency and verifiability

One of the key aspects of the DKG is its capacity for transparency. By organizing AI-grade Knowledge Assets (KAs) in a decentralized manner, the DKG ensures that the data AI agents use to make decisions can be traced back to its origins, with any tampering or modification of that data being transparently recorded and verifiable on the blockchain. This is crucial for the First Law, where transparency in data sourcing can prevent AI from making decisions that might harm humans due to incorrect or biased information.
Ownership and control

The DKG allows each Knowledge Asset to be associated with a non-fungible token (NFT), providing clear ownership and control over the information. This aspect directly impacts how AI agents adhere to the Second Law. Namely, by allowing agents to own their knowledge, the DKG empowers AI agents to respond to human commands based on a robust, reliable data set that they control, ensuring they follow human directives while also adhering to the ethical boundaries set by the laws. This capability also allows agents to monetize the Knowledge Assets they have created (i.e. charge other agents, AI or human, for accessing their structured data), enabling agents’ economic independence.
Contextual understanding and decision-making
The semantic capabilities of DKG provide AI with a richer context for understanding the world — an ontological, symbolic world model to complement GenAI inferencing, which is vital for the Third Law. The interconnected nature of knowledge in the DKG means it is better contextualized, allowing AI to make decisions with a comprehensive view of the situation. For example, understanding the broader implications of self-preservation in contexts where human safety is paramount ensures that robots do not prioritize their existence over human well-being.
Building trust through decentralization
Decentralization is at the heart of the DKG’s effectiveness in fostering trust:
Avoiding centralized control: Traditional centralized databases can be points of failure or manipulation, especially in multi-agent scenarios. In contrast, the DKG distributes control, reducing the risk of misuse or bias in AI decision-making. This decentralized approach helps build a collective, trustworthy intelligence that aligns with human values and safety.

Community contribution: The DKG facilitates a crowdsourced approach to knowledge, where contributions from various stakeholders can enrich the AI’s understanding of ethical and practical scenarios, further aligning AI behavior with the Three Laws. This community aspect also encourages ongoing vigilance and updates to the knowledge base, ensuring AI systems remain relevant and safe.

Grow and read AI agents’ minds with the ChatDKG framework powered by DKG and ElizaOS
The upgrade of ChatDKG marks a pioneering moment, combining the power of the OriginTrail Decentralized Knowledge Graph (DKG) with the ElizaOS framework to create the first AI agent of its kind. Empowered by the DKG, ChatDKG utilizes the DKG as collective memory to store and retrieve information in a transparent, verifiable manner, allowing for an unprecedented level of interaction where humans can essentially “read the AI’s mind” by accessing its data and thought processes. This unique feature not only enhances transparency but also fosters trust between humans and AI.
The integration with ElizaOS is based on a dedicated DKG plugin, with which ElizaOS agents can create contextually rich knowledge graph memories, storing structured information about their experiences, insights, and decisions. These memories can be shared and made accessible across the DKG network, forming a collective pool of knowledge graph memories. This allows individual agents to access, analyze, and learn from the experiences of other agents, creating a dynamic ecosystem where collaboration drives network effects between memories. See an example memory knowledge graph created by the ChatDKG agent here.
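As a rough illustration only, an agent memory published as a small JSON-LD knowledge graph might look something like the sketch below. Every field name and value here is hypothetical; the actual DKG plugin schema is not shown in the excerpt.

```python
# Hypothetical shape of an agent "memory" Knowledge Asset as JSON-LD.
# The DID and all values are invented for illustration.
memory = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "author": {"@type": "SoftwareApplication", "name": "example-agent"},
    "about": "Observed a spike in shipping delays on route X",
    "dateCreated": "2024-12-10T14:03:00Z",
    "isBasedOn": ["did:dkg:example-asset-123"],  # provenance links to source assets
}
```

Because each memory is structured and linked to its sources, other agents can in principle check provenance before relying on it.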
Tapping into collective memory will be enhanced with strong agent reputation systems and robust knowledge graph verification mechanisms. Agents can assess the trustworthiness of shared memories, avoiding hallucinations or false data while making decisions. This not only enables more confident and precise decision-making but also empowers agent swarms to operate with unprecedented coherence and accuracy. Whether predicting trends, solving complex problems, or coordinating large-scale tasks, agents will be able to achieve a new level of intelligence and reliability.
Yet, this is only the beginning of the journey toward “collective neuro-symbolic AI,” where the synthesis of symbolic reasoning and deep learning, enriched by shared, verifiable knowledge, will redefine the boundaries of artificial intelligence. The possibilities for collaborative intelligence are limitless, paving the way for systems that think, learn, and evolve together.
Moreover, ChatDKG invites users to contribute to its memory base, growing and refining its knowledge through direct interaction. This interactive approach leverages the ElizaOS framework’s capabilities to ensure that each exchange informs the AI and enriches its understanding, making it a dynamic participant in the evolving landscape of knowledge.
Talk to the ChatDKG AI agent on X to grow and read its memory!
Bridging trust between humans and AI agents with Decentralized Knowledge Graph (DKG) and ElizaOS… was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
As 2025 approaches, supply chain trends like digital transformation, AI, sustainability, and smart logistics remain top of mind.
In this episode, James Chronowski, Vice President of Strategic Account Management at GS1 US, joins hosts Reid Jackson and Liz Sertl to explore how data quality plays a crucial role in addressing these trends. James offers practical insights for businesses to tackle emerging challenges and seize opportunities in an evolving supply chain landscape.
In this episode, you’ll learn:
The top trends shaping the supply chain industry in 2025
Why data quality and governance are essential for businesses
How to build resilient supply chains in a rapidly changing environment
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(02:36) Current and future trends in the supply chain
(08:20) The foundational role of data governance
(11:42) How businesses can be more resilient in 2025
(13:52) James Chronowski’s favorite tech
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
James Chronowski on LinkedIn
The post Results of 3rd Annual Elections to the Board of the Velocity Network Foundation appeared first on Velocity.
ABSTRACT: By grounding technical decisions in ethical values, we can create compassionate digital architectures. This article examines how core human values such as dignity, autonomy, and human rights inform the design of trustworthy digital systems to enable progressive trust, safeguard privacy, promote individual choice, and build resilient systems resistant to coercion.
As we enter 2025, I’m reflecting on a journey of decades dedicated to advancing privacy, security, and human autonomy in the digital age. My body of work dates back to the 1990s, with early contributions alongside cryptographic pioneers and my co-authorship of the IETF TLS 1.0 standard. This year also marks the 10th anniversary of the first “Rebooting Web of Trust” workshop, a real milestone in my leadership role in shaping secure technologies such as Self-Sovereign Identity and the W3C Decentralized Identifiers standard.
Over the past decade, my focus as a trust architect has sharpened on designing digital systems that empower individuals while respecting core values such as autonomy and human dignity. These designs play a critical role in how individuals express themselves, engage with communities, and pursue their aspirations in a world increasingly shaped by digital interactions.
Yet, this digital realm presents a dual reality. While it opens up unprecedented opportunities, it also makes us increasingly vulnerable to exploitation, coercion, and pervasive surveillance. This tension places a profound responsibility on architects of digital systems: we must ensure that technical designs are guided by deeply rooted human values and ethical principles.
Looking ahead to the next ten years, I reaffirm my commitment to these values, charting a course for the future that places human flourishing and trust at the center of technological progress. But fulfilling this commitment requires a complex answer to a simple question: how can we design systems that uphold dignity, autonomy, and human rights?
The Core Values of Autonomy & Dignity
When we design digital systems, we’re not just creating technical specifications. We’re crafting spaces where people will live significant portions of their lives. To give them the ability to truly live and excel, we must give them autonomy: a digital system must empower individuals to control their own destinies within this digital realm. To do so, it must provide them with tools that:
Protect their data.
Exercise control over their digital presence.
Ensure freedom from coercion.
Cultivate trust through direct, transparent & efficient peer-to-peer interactions.
Facilitate interactions built on trust and agency.
Enable meaningful participation in the digital economy.
Support engagement that aligns with their values and priorities.
Foster resilience against systemic vulnerabilities.
Operate seamlessly across jurisdictions and political boundaries.

(See my “Principles of Dignity, Autonomy, and Trust in Digital Systems” in the Appendix for a more extensive look at what I consider core values for digital system design.)
Providing individuals with digital autonomy is mirrored by the concept of digital dignity. A digital system that prioritizes dignity respects the individuality of its users and safeguards their right to privacy. It minimizes the data collected, provides clear and revocable consent mechanisms, and ensures that control remains in the hands of the user. A dignified system doesn’t simply protect; it fosters agency and participation, allowing individuals to thrive without fear of surveillance, discrimination, or exploitation.
Autonomy is also closely linked to the concept of trust. You must be able to know and trust your peers in order to truly have the autonomy to make meaningful decisions. This is where systems like progressive trust come in.
A system built on autonomy, dignity, and trust ultimately treats individuals as more than their administrative identities; it recognizes that individuals possess an ineffable core of self that transcends digital representation. The first principle of Self-Sovereign Identity, ‘Existence,’ upholds this kernel of individuality, affirming that any digital identity must respect and support the inherent worth of the person behind it.
To properly respect autonomy and dignity also requires careful attention to power dynamics and accountability. Distinct standards of transparency and privacy should address the power imbalances between individuals and institutions. Achieving this balance involves respecting individual privacy while enabling appropriate oversight of powerful institutions. We must protect the vulnerable while ensuring our larger administrative systems remain fair and just.
We must also address the crucial question: how do we make privacy-preserving technology economically accessible to everyone? Any autonomy-enabling digital system must balance individual and collective interests by supporting sustainable development of digital infrastructure while fostering individual economic sovereignty and resilience. We must reward contributions to shared resources, uphold autonomy and self-determination, and ensure equitable access to rights-preserving technologies. By protecting individual freedoms and enabling fairness, privacy can ultimately be a tool that encourages participation regardless of economic means.
Decentralized identity wallets offer an example of how to embody the characteristics of autonomy, dignity, and trust, while also considering issues such as privacy, balance, and accessibility. They empower individuals to securely prove their credentials (such as educational achievements or professional certifications) directly to peers, without relying on central authorities that could arbitrarily deny their accomplishments. Consider Maria, a small business owner living in a vibrant but economically challenged neighborhood of Buenos Aires, Argentina. Using a self-sovereign, decentralized identity wallet provided by the city, she is able to secure microloans without compromising her privacy, a triumph for both dignity and autonomy.
As for how these core values transform into the design principles of decentralized identity wallets: that’s the next question to address.
From Values to Design Principles
The translation of the core values of autonomy, dignity, and trust into concrete design principles shapes every aspect of the trust architectures I build and guides me to specific technical choices:
Cryptographically secure, self-certifying identifiers that operate independently of central authorities.
Local or collaborative key generation and management to keep control in users’ hands.
Peer-to-peer protocols that resist centralized rent-seeking and walled gardens.
Offline-first capabilities to prevent connectivity from becoming a point of coercion.
Data minimization by default.
Choices for elision and redaction to control what individuals share (sketched below).
Cryptographic selective disclosure to prevent unwanted correlation and tracking.
Revocable permissions to ensure users retain ongoing control over their information.
Zero-knowledge proofs or other systems that can balance privacy and accountability without enabling bad actors.
Decentralized architectures, not as an ideological preference, but as a practical necessity.

The importance of these protections isn’t theoretical. My work examining sensitive data — including wellness, educational credentials, financial transactions, and identity documentation — has revealed how seemingly benign information can threaten human rights when misused. Health data can enable discrimination or coercion. Educational records can create permanent, unchangeable markers that limit opportunities. Financial and identity data can be weaponized to exploit or disenfranchise individuals.
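As a minimal sketch of the elision and selective-disclosure choices listed above, and only a sketch (real designs such as Gordian Envelopes add per-field salts and signatures), each field of a document can be committed to by hash so that withheld fields remain verifiable without being revealed:

```python
# Illustrative hash-based elision; the claims and helper names are invented.
import hashlib
import json

def digest(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

claims = {"name": "Maria", "city": "Buenos Aires", "income": 42000}

# Commit to each field separately, then derive a root over the digest set.
# NOTE: production designs add per-field salts so digests can't be brute-forced.
field_digests = {k: digest({k: v}) for k, v in claims.items()}
root = digest(field_digests)  # in a real system, this root is what gets signed

# The holder reveals only "city"; withheld fields travel as opaque digests.
revealed = {"city": claims["city"]}
withheld = {k: d for k, d in field_digests.items() if k not in revealed}

# The verifier recomputes the root from revealed values plus withheld digests.
recomputed = {**withheld, **{k: digest({k: v}) for k, v in revealed.items()}}
assert digest(recomputed) == root  # holds without ever exposing name or income
```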
A values-driven design can therefore be seen as not just an abstract focus on ideals such as autonomy, but protection against real-world harms. The rights to be forgotten, to correct errors, and to recover from systemic or administrative injustices ensure fairness in digital interactions. The ability for an individual to selectively share aspects of their identity protects them from being reduced to digital records or confined to singular contexts.
From Design Principles to Education
Implementing human-centric design patterns reveals another challenge: helping developers understand not just the technical complexity, but the human purpose behind each design choice. Developers must grasp not only how their systems operate but also think critically about why their design decisions matter for privacy, autonomy, and dignity.
While technical resources such as documentation and tutorials are indispensable for this education, true progress depends on fostering a compassionate culture where developers internalize value-driven imperatives. This has led me to prioritize the cultivation of decentralized developer ecosystems rooted in collaboration, open development standards, and shared learning. I’ve done this through a variety of means:
Workshops that convene developers, policymakers, and advocates to share insights, collaborate, and explore innovative approaches.
Hackathons and Sprints addressing pressing challenges in digital trust, enabling participants to co-create solutions in hands-on environments.
Regular Developer Meetups for discussing current challenges, sharing practical experiences, and aligning on future roadmaps.
Peer Review and Collaboration Forums to ensure transparency, accountability, and robust feedback in the development processes.
Cross-Organization Coordination to facilitate collaborative projects, share resources, and distribute financial and time-related investments such as security reviews.
Ecosystem Building to design decentralized solutions that balance individual empowerment with collective benefit, ensuring that all contributors — users, developers, and communities — derive meaningful value and that mutual respect is cultivated through shared goals and open participation.
Mentorship Programs to guide emerging developers in adopting values-driven approaches, fostering ethical practices from the outset of their careers.
Advocacy Efforts that include collaborating with policymakers and regulators to define a techno-social contract that upholds human dignity, ensures equitable and compassionate digital rights, and protects the interests of the vulnerable.

With this decentralized, collaborative approach to education, no single entity controls the evolution of these technologies. Instead, innovation is fostered across a diverse network of developers, building resilience into these systems and ensuring that solutions remain adaptable, inclusive, and accessible. This cooperative spirit reflects the very principles of autonomy, compassion, and inclusivity that underpin trustworthy digital systems.
From Education to Implementation
As communities evolve from educational groups to implementation groups, forums and discussions continue to expand the community and allow us to address the broader societal implications of technical choices. Foundational principles should follow.
The Ten Principles of Self-Sovereign Identity is an example of a set of foundational principles that directly evolved from discussion at an educational workshop (RWOT2). The Gordian Principles and Privacy by Demand are other examples of core principles that evolved out of earlier discussions. Principles such as these form a bedrock for the values we will work to embed in actual implementations.
Code reviews and project evaluations should then include these principles — and more generally ethical alignment — as a key criterion. They’re not just about technical correctness! By embedding values into every stage of development, we ensure that systems are designed to empower individuals, not exploit them.
How can we manage the critical balance between transparency for accountability and privacy for individuals? How do we address power dynamics and ensure systems protect the rights of the vulnerable while holding powerful entities accountable? Ultimately, how do we prioritize both user autonomy and security in decisions around data storage, key management, or cryptographic algorithms? These are questions that should both arise and be addressed when considering a full education-to-implementation pipeline that is based on collaboration and the consideration of values.
Ultimately, implementing systems that respect dignity and autonomy demands a new kind of techno-social contract. This contract must bridge multiple realms:
The technical capabilities that make solutions possible.
The cultural shifts that make them acceptable.
The economic incentives that make them sustainable.
The political will that makes them viable.
The contractual & legislative agreements that make them durable.

This comprehensive approach will serve both individual autonomy and our collective commons.
By ensuring that digital trust and human dignity remain at the core of technological progress, we build systems that serve as a foundation for a more equitable, humane, and resilient digital future. The result is implementations that transcend technical excellence by instilling a sense of stewardship among developers. They become not just the creators of secure systems but also champions of the communities these systems serve.
From Implementation to Deployment
Any framework to support values such as autonomy, dignity, and trust must be holistic in its approach.
Technical standards and specifications must harmonize with cultural norms and social expectations.
Economic models must simultaneously foster individual resilience and collective benefits, ensuring that privacy and autonomy remain accessible to everyone and don’t become luxuries available only to the wealthy.
Cultural norms and legislative efforts must go beyond surface-level privacy protections, addressing both the technical realities and human needs at stake.
Most importantly, technical and political discourse must evolve to recognize digital rights as fundamental human rights. This paradigm shift would enable policies that support compassionate decentralized approaches while holding powerful actors accountable to the communities they serve.

Nurturing collaborative ecosystems plays a central role in this transformation. We must foster cultures of ethical awareness not just among developers but across society. This means supporting implementers and maintainers who understand not just the “how” of our systems, but the “why”. It means engaging leaders who grasp both technical constraints and human needs and creating sustainable economic models that reward contributions to the commons while protecting individual rights.
Legal deployment has always been one of the trickiest challenges in popularizing a system that supports individual autonomy, but the concept of Principal Authority presents a promising foundation, courtesy of Wyoming’s digital identity law. It goes beyond the traditional frameworks of property and contract law, which, while useful, are insufficient in addressing the unique challenges of digital identity.
Property law focuses on ownership and control and contract law governs agreements between parties, but neither fully captures the dynamic, relational nature of digital representations or the need for individual agency in decentralized systems. Principal Authority, grounded in Agency Law, functions much like the relationship between a principal and an agent in traditional legal contexts. For instance, just as an agent (like a lawyer or real estate agent) acts on behalf of a principal while preserving the principal’s control, Wyoming’s digital identity law ensures that individuals retain ultimate authority over any actions or representations made on their behalf in the digital space. This legal framework acknowledges human agency — not mere ownership or contractual consent — as the primary source of legitimate authority. The result is a modern recognition of individual sovereignty, and therefore autonomy, that still fosters collaboration and commerce in the increasingly interconnected digital realm.
But, even if Principal Authority does prove a useful tool, it’s just one tool in a whole toolkit that will be necessary to successfully deploy rights-supporting software into society.
Conclusion
My responsibility as a trust architect is not simply to build systems that work, but to build systems that work for humanity. This requires a steadfast commitment to values, a willingness to navigate difficult trade-offs, and a relentless focus on aligning design principles with human needs.
The technical challenges of implementing values-driven design are significant, but they’re challenges worth solving. When we build systems that respect human rights and dignity, we create digital spaces that enhance rather than diminish human flourishing.
As developers, policy makers, or advocates, we hold the power to embed human values into every line of code, every standard, and every policy. As we build tomorrow’s digital ecosystems, we must therefore ask: What can I do to make trust and dignity the foundation of our systems?
To answer that question in a positive way will ultimately require a multi-stakeholder effort where technologists, policy makers, and civil society collaborate to uphold principles of equity, inclusion, and transparency in all aspects of digital architecture, down the entire linked chain from values to design to education to implementation to deployment.
I hope you’ll be part of that undertaking.
Appendix 1: Principles of Dignity, Autonomy, and Trust in Digital Systems
While working on this article, I put together my own principles for dignity, autonomy, and trust in digital systems. As with my self-sovereign principles of a decade ago, I am offering these up for discussion in the community.
1. Human Dignity: Design systems that prioritize and respect the inherent dignity of every individual. Embed privacy protections, minimize data collection, and provide clear, revocable consent mechanisms that align with user empowerment. Protect individuals from harm while fostering compassionate digital environments that promote trust, human flourishing, and technological progress aligned with human-centric values, actively considering potential societal impacts and unintended consequences.

2. Autonomy & Self-Determination: Empower individuals to control their digital identities and make decisions free from coercion or undue influence. Enable them to manage their interactions, transact freely, preserve their sovereignty, act as peers not petitioners, and assert their rights through decentralized, compassionate, user-controlled systems.

3. Privacy by Design (& Default): Embed robust privacy protections into every system, implementing data minimization, selective disclosure, anti-correlation, and cryptographic safeguards as default practices. This ensures that users retain control over their information and remain shielded from tracking, correlation, and coercion.

4. Resilience Against Exploitation: Architect systems to withstand adversarial threats and systemic vulnerabilities. Leverage decentralization, cryptographic protections, and offline-first capabilities to empower users even in hostile and adversarial environments and to ensure autonomy remains intact under pressure.

5. Progressive Trust: Design systems that reflect the natural evolution of trust, enabling selective and intentional information sharing. Foster trust gradually through mutual engagement, avoiding premature commitments, unnecessary reliance on intermediaries, or imposed full disclosure.

6. Transparency & Accountability: Hold powerful institutions accountable while safeguarding individual privacy. Balance transparency with confidentiality to mitigate power imbalances, protect the vulnerable, and ensure justice and fairness in digital interactions. Ensure that innovation and system development prioritize fairness and compassionate considerations, holding powerful institutions accountable for societal impacts.

7. Interoperability: Foster systems that are interoperable across cultural, legal, and jurisdictional boundaries. Promote inclusivity by prioritizing open standards, decentralized infrastructures, and accessible tools that serve diverse communities while avoiding exclusivity or centralized gatekeeping.

8. Adaptive Design: Incorporate insights from Living Systems Theory, Ostrom’s Commons, and other governance and design models to build architectures that are dynamic, resilient, and capable of evolving alongside societal and technological changes. Emphasize adaptability through iterative growth, collective stewardship, and interoperability, balancing stability with flexibility to support sustainable and inclusive digital ecosystems.

9. A Techno-Social Contract: Bridge technical capabilities with cultural, economic, and legislative frameworks to create a sustainable, human and civil rights-preserving digital ecosystem. Recognize digital rights as fundamental human rights and align systems with shared values of autonomy, dignity, and collective benefit.

10. Ethics: Cultivate a culture of ethical awareness, critical thinking, and collaboration among developers, policymakers, and users. Ensure technical decisions align with principles of trust and dignity by embedding education, mentorship, and a commitment to shared responsibility in the development process. Encourage innovation that is mindful of societal impacts, fostering a development ethos that prioritizes responsibility and safeguards against unintended consequences.

Appendix 2: Use Cases for Values Designs
Values affect all of my designs. Following is some discussion of how they’ve influenced my work on self-sovereign identity and progressive trust.
Self-Sovereign Identity
The conviction that technical designs must be built on human values came into sharp focus for me in 2016 when I authored the 10 Principles of Self-Sovereign Identity. These principles were not born from technical specifications alone but from a deep commitment to dignity, autonomy, and human rights. Over time, those values have guided the development of technologies such as Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), and the DIDComm protocol for secure, private communication. They have also influenced broader thinking around cryptographic digital assets such as Bitcoin. I have come to see these values not as abstract ideals but as the very foundation of trust itself: principles that must underpin every digital system we create.
My principles of Self-Sovereign Identity also had a strong historical basis: they were built on a deep historical and philosophical foundation. The concept of sovereignty has evolved over centuries — from feudal lords to city-states to nations — consistently reflecting a balance between autonomy and interconnection. When I wrote about the principle of “Control”, it was not about advocating absolute dominion but about framing sovereignty as the right to individual agency and prosperity, much like medieval cities, which preserved their independence while flourishing within broader networks of trade and diplomacy.
This understanding was deeply influenced by Living Systems Theory, which shows how every entity maintains its autonomy through selective boundaries while remaining part of a larger ecosystem. Just as a cell’s membrane allows it to control what passes in and out while still participating in the larger organism, digital identity must enable both individual autonomy and collective participation. This biological metaphor directly informed principles such as “Existence” and “Persistence,” which recognize that identity must be long-lived but also able to interact with its environment, and “Access” and “Portability”, which define how identity information flows across boundaries.
The principles also reflect Ostrom’s insights about managing common resources as well as feminist perspectives on sovereignty that emphasize agency over control. When I wrote about the principles of “Consent” and “Protection”, I was describing the selective permeability of these digital boundaries—not walls that isolate, but membranes that enable controlled interaction. “Interoperability” and “Minimization” similarly emerged from understanding how sovereign entities must interact while maintaining their independence and protecting their core rights.
These concepts culminate in the final SSI Principles such as “Transparency,” which balances individual autonomy with collective needs, and “Portability,” which ensures that identities can move and evolve just as living systems do. Each principle reflects this interplay between values and technical implementation, creating a framework where digital sovereignty serves human dignity. They weren’t meant to be an endpoint but rather a starting point for an evolving discussion about sovereignty in the digital age — one that continues to guide our work as we push the boundaries of what’s possible in digital identity, ensuring our innovations prioritize human needs rather than subordinating them to technology.
The technical complexity required to implement such systems is significant, but it serves a deeply human purpose: the ability to build autonomy and trust.
Progressive Trust
Trust is not static; it evolves over time — a concept I describe as progressive trust. This principle reflects how trust naturally develops between people and organizations, both in the physical and digital worlds. Relationships are built incrementally, through selective and intentional disclosures, rather than being imposed upfront or dictated solely by third-party intermediaries. This gradual evolution is essential for fostering genuine connections while mitigating risks.
I discovered this concept through years of observing how people actually build relationships. For instance, when meeting someone at a conference, we don’t immediately share our life story. Instead, we begin with small exchanges, revealing more information as comfort, context, and mutual understanding grow. Digital systems must mirror this natural evolution of trust, creating environments that respect psychological needs and empower individual agency.
A well-designed system transforms these ideas about progressive trust into deployable systems by enabling users to disclose only what is necessary at each stage, while retaining the ability to refine or revoke permissions as relationships deepen, change, or dissolve. This flexibility demands advanced technical solutions, such as:
Sophisticated cryptographic protocols that enable selective and intentional disclosure.
Relationship-specific identifiers to ensure contextual privacy (see the sketch below).
Mechanisms to prevent unwanted tracking or correlation.
Tools that balance transparency with security, safeguarding trust while avoiding vulnerabilities that could undermine it.

The technical complexity required to implement such systems is significant, but it serves a deeply human purpose: enabling individuals to build trust incrementally, naturally, and on their own terms.
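One of the listed building blocks, relationship-specific identifiers, is easy to sketch. Assuming a hypothetical wallet-held root secret, a distinct pairwise identifier can be derived for each peer, so that two verifiers cannot correlate the same user across contexts:

```python
# Illustrative pairwise-identifier derivation; the secret and peer names are invented.
import hashlib
import hmac

# Hypothetical wallet-held root secret; it never leaves the user's device.
master_secret = b"example-root-secret"

def pairwise_id(peer: str) -> str:
    # One-way HMAC derivation: stable per relationship, unlinkable across peers.
    return hmac.new(master_secret, peer.encode(), hashlib.sha256).hexdigest()[:16]

print(pairwise_id("airline.example"))  # differs from the bank's view...
print(pairwise_id("bank.example"))     # ...yet each is stable over time
```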
Knowing the values we are aligning with from the start helps to define this sort of architecture, even (as with progressive trust) when it’s hard. The result is an architecture that not only reflects the organic nature of human relationships but also upholds autonomy, fosters confidence, and protects against coercion or exploitation.
This came in the mail today:
Everything they list is something I don’t want to do. I’d rather just accumulate the miles. But I can’t, unless I choose one of the annoyances above, or book a flight in the next three months.
So my customer journey with American is now derailed.
There should be better ways for customers and companies to have journeys together.
Hmm… Does United have one?
Here’s a picture of my customer journey with United Airlines, as of today:
I’m also a lifetime member of the United Club, thanks to my wife’s wise decision in 1990 to get us both in on that short-lived deal.
Premier Platinum privileges include up to three checked bags, default seating in Economy Plus (more legroom than in the rest of Economy), Premium lines at the ticket counter and Security, and boarding in Group One. There are more privileged castes, but this one is a serious tie-breaker against other airlines. Also, in all our decades of flying with United, we have no bad stories to tell, and plenty of good ones.
But now we’re mostly based in Bloomington, Indiana, so Indianapolis (IND) is our main airport. (And it’s terrific. We recommend it highly.) It is also not a hub for any of the airlines. The airline with the most flights connecting to IND is American, and we’ve used them. I joined their frequent flier program, got their app, and started racking up miles with them too.
So here is one idea, for every airline: having respect for one’s established status with other airlines means something. Because that status (or those stati) is a credential: it says something about me as a potential passenger. It would be nice also if what I carry, as an independent customer, is a set of verifiable preferences—such as that I always prefer a window seat, never tow a rolling bag on board (I only have a backpack), and am willing to change seats so a family can sit together. Little things that might matter.
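To imagine what such portable, verifiable preferences might look like, here is a purely hypothetical credential payload. The type name and every field below are invented for illustration; only the general shape follows the W3C Verifiable Credentials model.

```python
# Hypothetical passenger-preference credential; schema and values are invented.
preference_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "PassengerPreferenceCredential"],
    "credentialSubject": {
        "seatPreference": "window",
        "carryOn": "backpack only",
        "willSwapSeatsForFamilies": True,
        # Status with other airlines could be self-asserted or attested by them.
        "statusWithOtherAirlines": ["Premier Platinum"],
    },
}
```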
I bring all this up because fixing “loyalty” programs shouldn’t be left up only to the sellers of the world. They’ll all do their fixes differently, and they’ll remain deaf to good input that can only come from independent customers with helpful tools of their own.
Developing those solutions to the loyalty problem is one of our callings at ProjectVRM. I also know some that are in the works. Stay tuned.
The EU’s Digital Operational Resilience Act (DORA) is set to take effect in January 2025. Its aim is to ensure that companies and institutions active in the EU financial sector […]
The post Are you ready for the new EU DORA regulations? appeared first on Kantara Initiative.
The FIDO Alliance’s Seoul Public Seminar was held on December 10, 2024, at the SK Telecom Pangyo Office. The theme for this milestone event was Unlocking a Secure Tomorrow with Passkeys and the event attracted nearly 200 attendees. The seminar gave professionals a chance to share the latest developments and implementations of simpler and stronger online authentication technology with passkeys.
Watch the Recap Video
The seminar featured a dynamic mix of global and local case studies and offered a comprehensive overview of Passkey/FIDO and FDO (FIDO Device Onboard) implementations. Here are some key highlights:
FIDO Alliance Update: Andrew Shikiar (Executive Director & CEO of the FIDO Alliance) announced the launch of Passkey Central, a resource hub offering guidance on implementing passkeys for consumer sign-ins. The site is now available in Korean, Japanese, and English.
What’s New with Passkeys on Google Platforms?: Eiji Kitamura (Developer Advocate at Google) discussed recent passkey advancements, including Android’s Credential Manager API and broader passkey support on Google platforms.
From Passwords to Passkeys: The TikTok Passkey Journey: XK (Sean) Liu (Technical Program Manager at TikTok) shared how the TikTok platform adopted passkeys for both enterprise and consumer services.
Secure Smart TV Authentication with Passkeys: Min Hyung Lee (Leader of the VD Business Security Lab at Samsung Electronics) demonstrated how passkeys enhance smart TV user authentication and outlined the future for this technology.
FIDO in eCommerce: Mercari’s Passkey Journey: Naohisa Ichihara (CISO at Mercari) detailed the company’s motivations, challenges, and strategies for mitigating phishing risks through passkey adoption within the C2C marketplace.

The 2024 Seoul Public Seminar also featured an exciting and interactive segment: the FIDO Quiz Show. Designed to engage attendees while reinforcing key learnings, the quiz brought an additional layer of fun and competitiveness to the event.
How it worked:
Session Pop Quizzes: After each seminar session, key takeaways were tested through pop quizzes. Attendees who answered correctly were rewarded with FIDO Security Keys, generously supported by Yubico.
Real-Time Quiz Show: At the end of the event, a live quiz show engaged all attendees. By scanning a QR code, participants could join in and compete for prizes. Eunji Na from TTA emerged as the top scorer and won a Samsung Galaxy Smartphone!
Think you know FIDO Alliance and passkeys? Test your knowledge with the same 15 quiz questions (in Korean) by scanning the QR code in the image below.
The seminar gained significant local media attention from outlets such as IT Daily, DailySecu, Byline Networks, Datanet, BoanNews, eDaily, and Korea Economic Daily. Coverage highlighted the launch of Passkey Central, emphasizing its potential to accelerate passkey adoption and reduce reliance on passwords.
We extend a heartfelt thanks to all speakers, including Kieun Shin and Hyungchul Jung (Co-Vice Chairs of the FIDO Alliance Korea Working Group), Heungyeol Yeom (Emeritus Professor at Soonchunhyang University), Jaebeom Kim (TTA), Yuseok Han (AirCuve), Heejae Chang and Keiko Itakura (Okta), Junseo Oh (Ideatec), and Simon Trac Do (VinCSS) for their invaluable contributions.
We also express our gratitude to our sponsors, whose support made this year’s Seoul Public Seminar a resounding success.
Bias in biometric identity systems still exists, but it is manageable, argues Andrew Shikiar at the FIDO Alliance
When you unlock your smartphone, open your bank app, or approve a purchase on your laptop, you are using biometric authentication. It is such an unconscious part of our daily lives that if you blink, you might miss it.
It’s no wonder that biometrics are popular with consumers—they’re convenient and secure. Recent FIDO research found that consumers want to use biometrics to verify themselves online more, especially in sensitive use cases like financial services, where nearly one in two people (48%) said they would use biometric technology. In fact, in the FIDO Alliance’s latest online barometer survey, consumers ranked biometrics as the most secure and preferred way to log in.
But for consumers, governments and other implementers, there is still a lingering ‘elephant in the room’ that continues to disrupt adoption: bias.
Should we worry about bias in biometrics?
FIDO Alliance’s research, Remote ID Verification – Bringing Confidence to Biometric Systems Consumer Insights 2024, found that consumers are concerned about bias in biometric facial verification systems: while the majority of consumers (56%) felt confident that face biometrics systems could accurately identify individuals, a significant number still had concerns about discrimination present in some systems.
Concern surrounding the accuracy of biometric systems in processing diverse demographics has been developing in recent years. In the UK in 2021, for example, Uber drivers from diverse ethnic backgrounds took legal action over claims its software had illegally terminated their contracts as its software was unable to recognise them.
While the struggle of Uber drivers is just one example that underscores the issue, this problem is affecting people of colour and other underrepresented demographics more broadly—FIDO’s research found that one in four respondents feel they experience regular discrimination when using automated facial biometric systems (25%).
Feelings of discrimination and bias in facial recognition systems impact the entire user experience and erode faith in the technology overall. Half of British consumers in the survey said they would lose trust in a brand or institution if it were found to have a biased biometric system, and 22% would stop using the service entirely.
It’s clear why organisations like governments and banks would worry about these hard-hitting reputational and trust risks. Despite biometrics being widely accepted as a more convenient and highly secure technology, the small number of systems that aren’t as accessible are leaving an air of concern that is slowing down more mainstream adoption.
Addressing bias in facial verification
The most important thing to note is that not all biometric systems are created equal. Currently, testing is done on a case-by-case basis by each organisation, which is both costly and time-consuming, with varying definitions of what “good” looks like.
Based on proven ISO standards and developed by a diverse, international panel of industry, government, and identity experts, FIDO Alliance’s new Face Verification Certification program brings the industry’s first independent certification to market to build trust around biometric systems’ performance.
The certification assesses a face verification system’s performance across different demographics, including skin tone, age, and gender, in addition to far more wide-reaching security and performance tests.
The intensive security and liveness testing also verifies that a provider’s face verification system can accurately confirm that identities are real and authenticating in real time, keeping threats like identity theft and deepfakes at bay. This is especially important for the most common use cases of face verification, like creating secure accounts, authenticating users, recovering accounts, and resetting passwords.
The beauty of independent certification is it sends a clear signal to consumers, potential clients, and auditors that the technology has been independently tested and is ready for both commercial and government use. It’s about building trust and showing that the provider takes security and fairness seriously.
More broadly, certification and independent global testing spark innovation and boost technological adoption. Whether you’re launching an identity verification solution or integrating it into regulations, open standards and certification provide a clear performance benchmark. This streamlines efforts, boosts stakeholder confidence and ultimately enhances the performance of all solutions on the market.
The future of identity
As the way we verify digital identities keeps evolving and demand to prove who we are remotely increases, biometric systems must be independently verified and free from bias. All technologies rolled out to this scale need to be fair and reliable for everyone.
The FIDO Alliance’s program demonstrates solution providers are serious about making sure biometric identity verification technologies are trustworthy, secure, and inclusive for all users. It’s like having a gold star or a seal of approval that says, “Hey, you can trust this system to be fair and safe.”
Biometrics for online identity verification is not just a promising concept; it’s rapidly becoming a practical necessity in today’s increasingly digital world, and it’s ready for implementation across various industries. With independent certification, organisations can jump over the final hurdle to widespread adoption, empowering a future of more seamless, digital and remote identity.
Modern technology makes starting an online business easy. However, that also means stiffer competition.
How can aspiring entrepreneurs succeed in the world of e-commerce? In this episode, Jesse Ness of Ecwid by Lightspeed joins hosts Reid Jackson and Liz Sertl to discuss the essential steps and common pitfalls of starting and growing an online business. They discuss how high-quality imagery, detailed product descriptions, and social media engagement can help your store stand out. Jesse also shares insights on emerging market trends like live selling and community engagement.
In this episode, you’ll learn:
How storytelling helps brands stand out in a crowded e-commerce market
The first steps to setting up a successful online store
Tips to overcome growth plateaus and how to scale your business effectively
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(01:43) Online selling with Ecwid
(04:02) How to set up an online store
(08:27) Share your brand story
(12:11) Why entrepreneurs give up too soon
(17:10) The rise of live selling and other e-commerce trends
(22:16) Jesse Ness’ favorite tech
(24:34) Using AI to enhance daily life
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Jesse Ness on LinkedIn
FIDO passkey adoption doubles in 2024 as major firms opt for passwordless log-in
Passkeys are a biometric security trend to watch in 2025. The FIDO Alliance themed its 11th annual FIDO Tokyo Seminar around accelerating passkey adoption, with presentations from Google, Sony Interactive Entertainment, Mastercard, and other organizations joining the journey to password-free living. Microsoft has shared its advice on how to make people love passkeys as it sweeps aside a major vulnerability that exposed 400 million Outlook 365 users.
Major tech brands drive mainstreaming of passkey account log-ins
In 2024, Amazon made passkeys available to 100 percent of its users and has seen 175 million passkeys created for sign-in to amazon.com globally. Google says 800 million Google accounts now use passkeys, with more than 2.5 billion passkey sign-ins over the past two years and sign-in success rates improving by 30 percent. Sony adopted passkeys for the global Playstation gaming community and saw a 24 percent reduction in sign-in time on its web applications.
Hyatt, IBM, Target and TikTok are among firms that have added passkeys to their workforce authentication options. More credential management products offering passkey options means more flexibility for consumers.
Japan joins passkey party in private sector, academia
The Japanese market showed a notable turn toward passkeys, with Nikkei, Nulab and Tokyu Corporation among firms embracing passwordless authentication technology. Nikkei will deploy passkeys for Nikkei ID as early as February 2025. Tokyu Corporation says 45 percent of TOKYU ID users have passkeys. And Nulab announced a “dramatic improvement in passkey adoption.”
Academia is helping drive innovation, with teams from Keio University and Waseda University winning acknowledgement for their research and prototypes at a slew of hackathons and workshops.
And FIDO, of course, is there to offer support, now offering its Passkey Central website resource on passkey implementation in Japanese, so that Japanese companies can take better advantage of its introductory materials, implementation strategies, UX and design guidelines and detailed roll-out guides.
The FIDO Japan Working Group, which includes 66 of the FIDO Alliance’s member companies, is now in its 9th year of working to raise passkey awareness in the country.
In this episode of the Trust Issues podcast, host David Puner sits down with Andrew Shikiar, the Executive Director and CEO of the FIDO Alliance, to discuss the critical issues surrounding password security and the innovative solutions being developed to address them. Andrew highlights the vulnerabilities of traditional passwords, their susceptibility to phishing and brute force attacks, and the significant advancements in passwordless authentication methods, particularly passkeys. He explains how passkeys, based on FIDO standards, utilize asymmetric public key cryptography to enhance security and reduce the risk of data breaches.
The conversation also covers the broader implications of strong, user-friendly authentication methods for consumers and organizations, as well as the collaborative efforts of major industry players to make the internet a safer place. Additionally, Andrew highlights the importance of identity security in the context of these advancements, emphasizing how robust authentication methods can protect personal and organizational data.
Tune in to learn about the future of authentication and the steps being taken to eliminate the reliance on passwords.
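As background to that discussion, the asymmetric challenge-response idea behind passkeys can be sketched in a few lines of Python with the `cryptography` package. This is the bare concept only, not the full WebAuthn protocol, which adds origin binding, attestation, and signature counters:

```python
# Minimal sketch of the asymmetric flow behind passkeys (not full WebAuthn).
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ec import (
    ECDSA, SECP256R1, generate_private_key,
)
from cryptography.hazmat.primitives.hashes import SHA256

# Registration: the authenticator creates a key pair; only the public key
# goes to the server, so a server breach leaks nothing reusable.
private_key = generate_private_key(SECP256R1())
public_key = private_key.public_key()

# Authentication: the server sends a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it after local user verification (e.g. a biometric)...
signature = private_key.sign(challenge, ECDSA(SHA256()))

# ...and the server verifies the signature with the stored public key.
try:
    public_key.verify(signature, challenge, ECDSA(SHA256()))
    print("Challenge verified: the user holds the private key.")
except InvalidSignature:
    print("Verification failed.")
```

Because no shared secret ever crosses the wire, there is nothing for a phisher to capture and replay, which is the core of the anti-phishing claim.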
The Swiss Parliament has resolved all outstanding differences between the National Council and the Council of States regarding the Electronic Identity Act (BGEID), paving the way for a formal final vote scheduled for December 20, 2024.
The implementation of SWIYU, encompassing both the electronic identity and its underlying trust infrastructure, has the potential to establish an open, interoperable ecosystem for digital credentials. This framework can be a solid foundation for the secure exchange of authentic data, thus fostering trustworthiness across digital applications in public administration, the economy, and civil society. Key principles of SWIYU and the e-ID include privacy by design, data minimization, user-centricity, and a commitment to openness and collaboration. We, at DIDAS, expect SWIYU, when fully implemented, to serve as an important building block promoting confidence in the digital realm, boosting economic growth and digital inclusion.
The new Swiss electronic identity (e-ID) system takes a completely different approach compared to the model that was rejected by voters in 2021. Unlike the earlier proposal, which handed the responsibility for issuing and managing digital identities to private companies, the new system is entirely state-operated. This ensures that the government, as a public entity, is responsible for issuing e-IDs and maintaining the necessary infrastructure. This change directly addresses the privacy and security concerns raised previously, making societal control easier. The updated framework is designed around user empowerment, with privacy by design and data minimization as fundamental principles, ensuring transparency and building confidence in its use.
What’s truly transformative is the system’s decentralized architecture, drawing its inspiration from Self-Sovereign Identity (SSI) principles. This gives individuals control over their own digital identities and the ability to decide what information to share with third parties, such as service providers. The design aligns with the “trust diamond” framework, which organizes four essential roles: the government as the issuer, individuals as the holders, service providers as the verifiers, and a governance framework that ensures everything operates within clear, enforceable and trusted rules. This structure creates a reliable and secure ecosystem for digital identity, addressing the shortcomings of the previous e-ID vision and resulting in a user-centric, privacy-preserving approach.
DIDAS is exceptionally proud to have made a number of key contributions to Switzerland’s efforts, ensuring the system reflects fundamental Swiss values such as federalism, direct democracy, self-determination, and autonomy. Since its inception in 2020, DIDAS has been a strong advocate for SSI principles, emphasizing user control over personal data and a need for a secure, privacy-preserving digital ecosystem. It has been an integral part of our vision that a digital trust ecosystem must safeguard privacy but also enable economic value creation.
Early Advocacy and Strategic Vision
In October 2020, DIDAS was established with the primary goal of positioning Switzerland as a leader in developing and implementing privacy-preserving technologies, services, and products related to digital identity and electronically verifiable data. This vision laid the groundwork for a digital trust ecosystem that emphasizes data sovereignty and identity management based on tight alignment with the SSI principles.
Early Advocacy for Self-Sovereign Identity Principles
In December 2021, DIDAS published an explainer on SSI, outlining its core principles and the association’s commitment to establishing a viable and thriving SSI ecosystem. The DIDAS initiative aimed from the start to educate stakeholders and promote the adoption of SSI principles and frameworks within Switzerland’s digital infrastructure, for a more privacy preserving and frictionless digital future.
Contributing to the Dialog around National e-ID Legislation
By October 2021, DIDAS had provided extensive commentary on Switzerland’s target vision for the e-ID system. The association advocated for an “ecosystem of digital proofs”, where the e-ID would serve as one credential among many, enabling both governmental and private entities to issue other types of credentials. This approach aimed to create a flexible and future-proof foundation for digital interactions in Switzerland.
In December 2021, following a public consultation, the Swiss Federal Council decided to orient the implementation of the future e-ID system based on Self-Sovereign Identity (SSI) principles. DIDAS welcomed this decision, recognizing it as a commitment to a decentralized solution architecture that prioritizes maximum privacy protection and positions the e-ID as a cornerstone of a broader ecosystem of digital credentials. “Ambition level 3” anchored the approach of building an ecosystem of (business-domain) ecosystems in which, in addition to the e-ID, other verifiable credentials can be exchanged securely and reliably.
Promoting Technological Innovation
In its early stages, DIDAS members established an open sandbox environment to facilitate the development and testing of Self-Sovereign Identity (SSI) solutions. This sandbox provided a controlled setting where developers and organizations could experiment with SSI technologies, enabling the creation of interoperable and secure digital identity systems. By offering access to resources such as repositories and live demonstrations, DIDAS’s sandbox played a crucial role in iteratively advancing knowledge within Switzerland’s E-ID movement.
DIDAS has consistently emphasized the importance of advanced digital signature technologies to enhance the Swiss e-ID framework. Following DIDAS’ statement in response to the e-ID technology discussion paper and its recommendation of Scenario “A” as a feasible technical starting point in February 2024, the association proposed in March 2024 to adopt the concept of dual signatures: a technology approach that bridges the gap between well-established but less feature-rich cryptography and newer but less well-known techniques. Supported by the US Department of Homeland Security, this technique involves attaching multiple digital signatures to a single payload, each offering distinct security or privacy features. This methodology enhances agility and robustness, accommodating various cryptographic standards and privacy needs without compromising data integrity.
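A minimal sketch of that dual-signature idea follows, hypothetically pairing ECDSA P-256 as the established scheme with Ed25519 standing in for a newer one; real deployments would more likely pair a classical signature with a privacy-preserving scheme such as BBS+:

```python
# Illustrative dual signatures: two independent signatures over one payload.
from cryptography.hazmat.primitives.asymmetric.ec import (
    ECDSA, SECP256R1, generate_private_key,
)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.hashes import SHA256

payload = b'{"credential": "swiss-e-id-example"}'  # invented example payload

# Signature 1: a well-established scheme (ECDSA over P-256).
ecdsa_key = generate_private_key(SECP256R1())
sig_classic = ecdsa_key.sign(payload, ECDSA(SHA256()))

# Signature 2: a newer scheme (Ed25519 here), which could later be swapped
# for a privacy-preserving scheme without touching signature 1.
eddsa_key = Ed25519PrivateKey.generate()
sig_modern = eddsa_key.sign(payload)

# Both signatures travel with the same payload; a verifier checks whichever
# scheme(s) it supports or its policy requires.
envelope = {"payload": payload, "signatures": [sig_classic, sig_modern]}
```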
Advocating for Economic Value Creation beyond the societal value of a self-sovereign E-ID
Beyond technological contributions, DIDAS has been a committed advocate for leveraging the e-ID programme by the Swiss Confederation, to establish a digital trust and authentic data exchange ecosystem that creates sustainable economic value. The Association further envisioned this future ecosystem removing friction in B2B and cross border processes, by enabling higher levels of assurance in automation, significantly reducing risk of fraud, simplifying the management of compliance as well as allowing for the proliferation of digital trust-based businesses and innovations. On the Basis of the DIDAS Sandbox, Members have been experimenting around common use cases to explore ecosystem value creation and are looking forward to support issuers, holders and verifiers as well as technology vendors, to further experiment with the confederation’s public beta infrastructure in early 2025.
In January 2024, during the World Economic Forum’s Annual Meeting in Davos, we collaborated with digitalswitzerland to co-organize the “Digital Trust” session at the digitalswitzerland Village. This event convened over 50 speakers and panelists, including industry leaders and policymakers, to discuss the critical role of digital trust in today’s interconnected world.
In September 2024, at an event organized by the State Secretariat for International Finance (SIF) and the Swiss Financial Innovation Desk (FIND) at the Swiss Embassy in Singapore, we had the privilege of moderating and contributing to discussions on digital trust, emphasizing the importance of verifiable data and trust frameworks in global financial ecosystems. Our insights are also shaping a soon-to-be-published paper in which DIDAS explores key principles and practical strategies to advance digital trust.
Collaborative Efforts and Future Outlook
We strongly believe that DIDAS’s collaborative approach of engaging, as a not-for-profit, independent association, with government bodies, private sector stakeholders, and civil society has been instrumental in shaping Switzerland’s digital identity efforts. The association’s commitment to a pragmatic, principle-based, iterative, and inclusive methodology has ensured that the SWIYU vision aligns with both national interests and international standards.
As Switzerland prepares for the final approval of the e-ID legislation on December 20, 2024, the foundational work of DIDAS remains important. We have much work ahead of us to support adoption of the e-ID and its mechanisms for exchanging authentic data. We also see our role in helping business leaders and innovators become fluent in applying these mechanisms. We will use the combined expertise of our members and our energy to promote and further enhance the key aspects of Ambition Level 3: governance and cross-ecosystem interoperability. Continued experimentation and dialogue are essential to uncovering and realizing the business value of this emerging Trust Infrastructure.
We are also proud to co-organize DICE (Digital Identity unConference Europe) in collaboration with Trust Square and the Internet Identity Workshop (IIW), rooted in Mountain View, California. DICE first launched in 2023 and exceeded expectations, with 160 expert participants contributing to dynamic discussions. The second DICE, in 2024, was a milestone: it was opened by Federal Councilor Beat Jans, underscoring the importance of these participatory conferences and their contribution to the development of the E-ID Framework and the Swiss Trust Infrastructure. DICE fosters joint learning, evolves collective thinking, and accelerates the adoption of digital identity and verifiable data solutions. In 2025, two events are planned, further advancing open dialogue as a cornerstone of collaboration for authenticity and trust in the digital realm.
The association’s vision of a secure, adaptable, and authentic data ecosystem built on SSI principles underlines its dedication to a sustainable digital environment that favors privacy and security, while enabling significant economic value creation.
We look forward to continuing to create positive impact with all of our members, partners and other stakeholders.
Cordially, The DIDAS Board
Further articles and details on contributions are available in the DIDAS Blog.
Data Protection Day: Freedom of Choice in the Digital Age
The Human Colossus Foundation contributes to the public conference (in French) on Wednesday 28 January 2025 organised by the Faculty of Law, Criminology and Public Administration (FDCA) of the University of Lausanne (registration required).
In the digital age, freedom of choice is profoundly affected by the way data is collected, shared and used. This freedom of choice is closely linked to the notion of privacy.
With the Internet and other networks, we are faced with a wide range of choices in all aspects of our daily lives. Whether it is online shopping, social networking, online banking or healthcare, we are constantly being asked to make decisions at both a personal and a professional level. With artificial intelligence tools making their way into our daily lives, is our private sphere still sufficiently protected to guarantee informational self-determination?
Taking the example of personalised medicine in the context of freedom of choice in the digital age, it becomes clear that access to and control of personal health data are crucial. Personalised medicine promises individualised diagnoses and treatments. This requires technological tools that allow patients to manage their health information proactively and ensure that it is used ethically and securely. Digital technology must also be used to empower patients, enabling them to make informed decisions about their health while contributing to significant advances in medical research.
Based on its work, the Foundation will present these concepts through the lens of current issues in Switzerland linked to the E-ID digital identity project and its impact on the healthcare ecosystem.
The Human Colossus Foundation is a neutral but technology-savvy, Geneva-based non-profit foundation under the supervision of the Swiss federal authorities.
Subscribe to our newsletter