Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!
As AI reshapes our digital world, questions of privacy and digital identity become increasingly critical. DIF member Nuggets is tackling these challenges head-on with their Private Personal AI and Verified Identity for AI Agents.
We interviewed CEO Alastair Johnson to learn how Nuggets is pioneering new approaches to protect individual privacy while enabling secure human-AI interactions through their decentralized identity wallet technology. Their insights reveal how enhanced AI capabilities and robust privacy protections can work together to build a more secure digital future.
Can you provide a brief overview of Nuggets and its mission in the digital identity and payments space?
Nuggets is a decentralized identity wallet and payment platform that guarantees trusted transactions, verifiable credentials, uncompromised compliance, and the elimination of fraud.
Our mission is to fundamentally change how personal data is stored, providing unparalleled privacy for everyone and everything and helping to create a radically safer and more secure internet.
What inspired the development of your two new solutions: Private Personal AI and Verified Identity for AI Agents?
Private Personal AI and Verified Identity for AI Agents were driven by the needs of our customers. These customers wanted to use AI in an education and healthcare setting but were worried about the privacy of their user data. They needed to effectively manage consent and identity authentication while ensuring privacy and security.
How does the Private Personal AI solution empower individuals in their interactions with AI systems and ensure data privacy and security?
As AI advances, the line between human and machine interactions is blurring.
The future demands a private, trustable human-AI interface that allows individuals to control their digital identities and data.
The Nuggets wallet ensures data remains private and enables users to authenticate and pay for products and services without sharing or storing their data.
Nuggets empowers users with a self-sovereign wallet that prioritizes personal data protection. Your personal information remains completely private and under your exclusive control. Only you can choose to selectively share specific preferences or data with AI agents, ensuring that your sensitive information stays secure and accessible solely to you at all times.
How does the Verified Identity for AI Agents solution benefit organizations deploying autonomous AI agents, and why is it important for AI agents to have their own sovereign digital identities?
In an era where AI agents are increasingly autonomous, establishing trusted digital identities for these agents has become crucial. Nuggets provides a comprehensive framework that ensures AI agents can operate securely and independently while maintaining accountability and trust.
We establish unique, verifiable digital identities for AI Agents. Each agent receives a sovereign identity that's cryptographically secured and fully auditable, allowing them to interact with systems and services while maintaining clear chains of attribution and responsibility.
Alongside establishing the Agent Identity, we enable Agent Authentication and Authorization with robust security measures to ensure only authorized agents can access business systems and data. This provides secure methods for verifying agent identities, preventing unauthorized access, and maintaining the integrity of AI infrastructure.
How do your solutions address concerns around AI accountability, trust, and potential security threats, as highlighted by industry leaders like those in the Salesforce report?
Because the data comes from a verified source, it is both accountable and trustworthy.
Organizations can safeguard sensitive data from emerging cybersecurity risks by leveraging confidential computing and decentralized self-sovereign identity technologies. Our approach creates a secure, isolated computing environment that prevents unauthorized data access and minimizes the potential for breaches in AI systems. By implementing these advanced protection mechanisms, companies can confidently utilize AI technologies while maintaining strict control over their proprietary and confidential information.
How do you envision these products impacting the adoption of AI technologies across various industries?
Private Personal AI with verified identities represents a transformative approach to AI integration. By addressing critical concerns around privacy, security, and accountability, these technologies could accelerate AI adoption across multiple domains, creating more trustworthy and sophisticated human-AI interactions.
The future of AI lies not just in technological capability but in building systems that fundamentally respect individual privacy and maintain transparent, verifiable interactions.
What are Nuggets' plans for future developments in digital identity and AI?
We’ve got some new products launching in Q1 2025 that place personal privacy and user autonomy at the forefront of AI and AI Agents. We are working closely with partners to create products that respect human agency. Our commitment goes beyond mere compliance—we actively design systems that give users unprecedented control, transparency, and peace of mind in their digital interactions, which is more crucial now than ever.
How can our readers learn more?
If anyone’s interested in learning more about these products and how they could work within their organization, we’d love to chat. You can reach out to us here.
Further reading on each product can be done via our website on the following product pages: Private Personal AI and Verified Identity for AI Agents.
In our monthly co-op days, our members spend time away from client work, thinking about bigger-picture stuff. We take the 10,000-foot view of our work together. Last week, at the start of a new year, we came together to discuss how we feel about our work and the state of the world.
We’ve thought about adjacent things before — in 2021, for example we came up with the Spirit of WAO which led to us defining five focus areas in 2023. Last year, we shared our “even overs” in which we outlined how we value, for example, work/life balance even over profit/surplus, learning even over efficiency, and documentation even over speed.
There are plenty of people who, right now, think that a “wait and see” attitude is the right orientation to the world. As our former Mozilla colleague Geoffrey MacDougall has pointed out in a recent blog post, those people are wrong. It’s time to roll our sleeves up and get shit done.
Hamming Questions

Based on what we’ve discussed before, we pondered what our Hamming Questions might be for this year:
Mathematician Richard Hamming used to ask scientists in other fields “What are the most important problems in your field?” partly so he could troll them by asking “Why aren’t you working on them?” and partly because getting asked this question is really useful for focusing people’s attention on what matters.
After some time thinking alone, we came up with a range of potential Hamming Questions. Discussing their various merits, we realised that a useful way to phrase them is in the form:
“If [X] how might we [Y] instead of [Z]”
We’re still pondering and thinking, but here are three questions we came up with to sharpen our planning, especially given that 2025 is the UN’s International Year of Co-operatives:
• If we want a future that respects people and the planet, what stops us from using worker-owned approaches instead of repeating old power structures?
• If every person’s talents and skills matter, how can we spread open ways to recognise and reward them, instead of sticking to formal qualifications alone?
• If AI now shapes so many parts of life, how can we think systemically about community-led digital literacies, instead of letting Big Tech set the agenda?

As you would expect, we’ve got some priors in each area: an email-based course on how to set up a worker-owned co-op, our work around Open Recognition, and our new site on AI literacies.
Next steps

Are these the kinds of questions YOU are pondering? Do you have access to funding to help us convene people around these issues? Does your organisation need some help coming up with your own Hamming Questions?
Let us know, either in the comments below, or via email: hello@weareopen.coop
Getting this year off to the right start was originally published in We Are Open Co-op on Medium.
NEWARK, NJ, January 10, 2025 – It is with great pride that we announce the promotion of Michelle Ferraro to the role of Associate Vice President of Business Development and Member Engagement at Edge. Since joining the Edge team, Michelle has consistently exemplified exceptional leadership, strategic vision, and a deep dedication to advancing Edge’s mission. Her contributions to member engagement and business development have been pivotal in driving growth and enhancing the experiences of our member institutions.
In her new role, Michelle will report directly to the Vice President of Marketing, Business Development, and EdgeEvents. She will play a critical leadership role in driving member engagement, cultivating strong partnerships, and ensuring alignment between Edge’s strategies and the needs of our growing membership base. Her responsibilities will include overseeing member relationship management, spearheading sales and revenue growth initiatives, and optimizing processes to support operational excellence.
Michelle’s expertise, professionalism, and passion for fostering connections within our community will undoubtedly continue to elevate Edge’s mission. As a member of our senior leadership team, she will be instrumental in shaping the organization’s direction and ensuring that it consistently delivers value to our members and stakeholders.
About Edge
Edge serves as a member-owned, nonprofit provider of high-performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks, and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common-good mission ensures success by empowering members for digital transformation with affordable, reliable, and thought-leading purpose-built advanced connectivity, technologies, and services.
The post Company Announcement: Michelle Ferraro Promoted to Associate Vice President of Business Development and Member Engagement appeared first on NJEdge Inc.
NEWARK, NJ, January 10, 2025 – It is with great pleasure that we announce the promotion of Adam Scarzafava to the role of Vice President of Marketing, Business Development, and EdgeEvents at Edge. Since joining us in 2018, Adam has consistently demonstrated outstanding leadership, strategic insight, and a deep commitment to our mission. His exceptional work in digital marketing and communications has been a driving force behind our growth, and we are confident that in his new position, Adam will continue to elevate our brand, drive new business opportunities, and foster deeper connections within the industry.
In his new role, Adam will oversee the development and execution of Edge’s comprehensive marketing strategy, spearhead our business development initiatives, and lead the charge on event management. As part of the Executive Leadership Team, Adam will play a pivotal role in shaping the direction of our company, ensuring that our marketing, business development, and event strategies align with our organizational goals. His expertise will be invaluable as we continue to expand our reach, foster partnerships, and create meaningful opportunities for our members and stakeholders.
Please join us in congratulating Adam on this well-deserved promotion. His dedication, vision, and passion for excellence are qualities that will undoubtedly continue to drive Edge forward.
The post Company Announcement: Adam Scarzafava Promoted to Vice President of Marketing, Business Development, and EdgeEvents appeared first on NJEdge Inc.
Marc Findon, Nok Nok Labs
Jonathan Grossar, Mastercard
Frank-Michael Kamm, Giesecke+Devrient
Henna Kapur, Visa
Sue Koomen, American Express
Gregoire Leleux, Worldline
Alain Martin, Thales
Stian Svedenborg, BankID BankAxept
Global e-commerce is booming and is expected to reach more than $6T by the end of 2024¹. Having the ability to sell products online has given merchants great opportunities to sell goods and services beyond their local market; however, it also comes with increased fraud. In fact, global ecommerce fraud in 2023 was estimated to reach roughly $48B¹, with the US accounting for 42% of that and the EU for about 26%.
1.1 Current Challenges in Remote Commerce
There are many types of ecommerce fraud, but the most prevalent type is transaction fraud. Transaction fraud occurs when a transaction is made on a merchant site with a stolen card and/or stolen credentials. Stolen credentials are readily available on the dark web to those who know how to access and use them.
To address those concerns, measures have been introduced to increase the overall security of remote commerce transactions, including tokenization of payment credentials and cardholder authentication. In some countries, such as India, or in Europe (under the second Payment Services Directive, PSD2), regulations mandate the adoption of either or both measures. These regulations are meant to ensure secure remote transactions; however, they add complexity to the checkout flow, as they may require a switch between the merchant and another interface, such as a bank’s interface.
Unfortunately, additional authentication may add friction, which can result in cart abandonment. The main reasons for cart abandonment include distrust of the merchant website or a complicated checkout flow. Customers prefer a simple payment process that doesn’t add friction such as that caused by payment failure, the need to respond to a one-time password (OTP) on a separate device, or the need to log in to a banking application.
1.2 How FIDO can help
The use of biometric authentication enabled through the Fast Identity Online (FIDO) Alliance standards is an opportunity to deliver a better user experience during the authentication process and hence reduce the risk of transaction abandonment.
FIDO has established standards that enable phishing-resistant authentication mechanisms and can be accessed from native applications and from the most popular browsers – thereby enabling a secure and consistent experience across the channels used by consumers. FIDO uses the term ‘passkeys’ for FIDO credentials, based on FIDO standards, that consumers use for passwordless authentication.
The World Wide Web Consortium (W3C) has developed Secure Payment Confirmation (SPC). SPC is a web API designed to enhance the consumer experience when authenticating to a payment transaction using FIDO authentication, and to simplify compliance with local regulations (such as PSD2 and dynamic linking in Europe).
1.3 Scope
This whitepaper intends to:
• Define Secure Payment Confirmation (SPC) and the benefits that it brings when FIDO is used to authenticate payment transactions
• List the current SPC payment use cases that can deliver those benefits and illustrate consumer journeys
• Provide a status on SPC support and the list of enhancements that could be added to the web standard to further improve security and user experience

¹ https://www.forbes.com/advisor/business/ecommerce-statistics/#:~:text=The%20global%20e%2Dcommerce%20market,show%20companies%20are%20taking%20advantage.

2. Secure Payment Confirmation (SPC) Benefits

Secure Payment Confirmation (SPC) is an extension to the WebAuthn standard, and aims to deliver the following benefits:

• A browser-native user experience that is consistent across all merchants and banks
• Cryptographic evidence of authentication (FIDO assertion), including transaction details signed by a FIDO authenticator
• Cross-origin authentication – for example, even if passkeys are created with the bank as the Relying Party, merchants can invoke cardholder authentication with passkeys within their environment, using input parameters received from the bank, so there is no need to redirect the consumer to the bank to authenticate with passkeys.

2.1 Browser Native User Experience
SPC introduces a standardized payment context screen showing details such as a merchant identifier, the card logo, the last 4 digits of the card number, and the transaction amount. The consumer is invited to explicitly agree to the transaction information displayed and then authenticate. Therefore, SPC can be experienced as a mechanism to collect consent from the consumer about the transaction details.
As in standard WebAuthn, the payment context screen is controlled by the user’s browser, which renders common JavaScript presentation attacks ineffective. The screen provides increased security, as it ensures that malicious web content cannot alter or obscure the presentation of the transaction details to the user – the browser display always renders on top of the web content from the triggering website. Figure 1 depicts an example of the SPC experience in Chrome.
Figure 1: Example of the SPC experience in Chrome
2.2 Generation of FIDO Assertion
With SPC, the transaction-related information displayed to the consumer, such as the merchant identifier and transaction amount, is sent securely to the FIDO authenticator and is signed by the same authenticator (transaction data signing).
The FIDO assertion generated by the authenticator reinforces compliance with certain regulations, such as the dynamic linking requirement under PSD2 in Europe, because the merchant identifier and transaction amount are signed by the authenticator itself. When combined with the browser-native user experience described in section 2.1, the relying party can be confident that the user was shown, and agreed to, the transaction details.
2.3 Cross Origin Authentication
When using FIDO without SPC, a consumer that creates a passkey with a relying party will always need to be in the relying party’s domain to authenticate with that passkey. In the remote commerce payment use case, this means that the consumer typically needs to leave the merchant domain and be redirected to the bank’s domain for authentication.
With SPC, any entity authorized by the relying party can initiate user authentication with the passkey that was created for that relying party. For example, a merchant may be authorized by a bank to authenticate the cardholder with the bank’s passkey.
Note that the mechanism for the relying party to authorize an entity to invoke SPC may vary. For example, a bank may share FIDO credentials with the merchant during an EMV 3DS interaction or through another integration with a payment scheme. The merchant will then be able to use SPC to initiate the payment confirmation and authentication process with a passkey, even if that passkey was created with the bank. Ultimately, the bank maintains the responsibility to validate the authentication.
2.4 Interoperability With Other Standards
SPC can be used in combination with other industry standards such as EMV 3-D Secure and Secure Remote Commerce (SRC), both of which are EMVCo global and interoperable standards.
3. SPC Use Cases

SPC can be used to streamline payments in a variety of remote commerce checkout scenarios such as guest checkout or a checkout using a payment instrument stored on file with a merchant.
In each of those payment scenarios, the relying party may be the issuer of the payment instrument (the bank), or a payment network on behalf of the bank.
The flows provided in this Chapter are for illustrative purposes and may be subject to compliance with applicable laws and regulations.
3.1 SPC With Bank as Relying Party
The creation of a passkey can be initiated outside of or during the checkout process:
• Within the banking interface: For example, when the consumer is within the banking application and registers a passkey with their bank, in which case the passkey will be associated to one or multiple payment cards and to the consumer device
• Within the merchant interface: For example, when the consumer is authenticated by the bank during an EMV 3DS flow and is prompted to create a passkey with the bank to speed up future checkouts – in which case the passkey will be associated to the payment card used for the transaction (and to additional payment cards depending on the bank’s implementation), as well as to the device used by the consumer

Figure 2 depicts the sequence (seven steps) of a passkey creation during a merchant checkout, where the merchant uses EMV 3DS and the consumer is authenticated by their bank:
Figure 2: Passkey creation during checkout
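To make this registration step concrete, here is a minimal sketch of how a bank’s page might create an SPC-capable passkey through WebAuthn. The relying party (bank.example), user details, and challenge are placeholder assumptions, exact parameters will vary by implementation, and the cast is used because TypeScript’s DOM typings may not yet include the SPC ‘payment’ extension.

```typescript
// Illustrative sketch only: a bank page registering an SPC-capable passkey.
// "bank.example", the user details, and the challenge are placeholder assumptions.
const registrationChallenge = Uint8Array.from([1, 2, 3, 4]); // in practice: random bytes issued by the bank

const credential = (await navigator.credentials.create({
  publicKey: {
    rp: { id: "bank.example", name: "Example Bank" },
    user: {
      id: new TextEncoder().encode("cardholder-123"), // opaque user handle chosen by the bank
      name: "cardholder@example.com",
      displayName: "Example Cardholder",
    },
    challenge: registrationChallenge,
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: { userVerification: "required", residentKey: "preferred" },
    // The SPC 'payment' extension marks the credential as usable for
    // Secure Payment Confirmation from merchant (cross-origin) contexts.
    extensions: { payment: { isPayment: true } } as AuthenticationExtensionsClientInputs,
  },
})) as PublicKeyCredential;

// The bank stores the credential ID and public key and associates them with
// one or more payment cards for later SPC authentications.
console.log(new Uint8Array(credential.rawId));
```

The credential ID captured here is what the bank can later hand to merchants (for example over EMV 3DS) so that they can invoke SPC, as described next.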
Once the passkey creation is complete, any merchant that has received the passkey information (which includes FIDO identifiers and Public Key) from the bank, through a mechanism agreed with the bank or the payment scheme, will be able to use SPC. Such a mechanism may include EMV 3DS or another integration with the payment scheme. For example, a merchant who implements EMV 3DS (i.e., version 2.3) will be able to benefit from SPC through the following steps:
1. When the merchant initiates EMV 3DS to authenticate the consumer, the bank decides whether an active authentication of the cardholder is necessary. If the decision is to perform the active authentication of the cardholder, the bank can first retrieve one or several passkeys associated with the card used for the transaction, verify that the consumer is on the same registered device, and then return the passkey(s) information to the merchant.
2. The merchant invokes the SPC web API in an SPC-supporting browser, including a few parameters in the request, such as the passkey information, card / bank / network logos, the merchant identifier, and the transaction amount (see the sketch after these steps).
3. If the browser can find a match for one of those passkeys on the device used by the consumer, the browser displays the logos, merchant identifier and the transaction amount to the consumer, and prompts for authentication with the passkey.
4. The authentication results are returned to the merchant, who in turn will share those results with the bank for validation through the EMV 3DS protocol.
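The sketch referenced in step 2 is shown below: a hedged example of how a merchant page might invoke SPC through the Payment Request API using the "secure-payment-confirmation" payment method. The credential ID, challenge, origins, amount, and card art URL are placeholder assumptions standing in for values the merchant would receive from the bank (for example over EMV 3DS).

```typescript
// Illustrative sketch only: merchant invoking SPC with parameters received from the bank.
const credentialIdFromBank = Uint8Array.from([5, 6, 7, 8]);    // placeholder passkey ID returned by the bank
const transactionChallenge = Uint8Array.from([9, 10, 11, 12]); // placeholder challenge issued by the bank

const request = new PaymentRequest(
  [
    {
      supportedMethods: "secure-payment-confirmation",
      data: {
        credentialIds: [credentialIdFromBank],    // passkey(s) associated with the card
        challenge: transactionChallenge,          // signed into the FIDO assertion
        rpId: "bank.example",                     // the bank is the relying party
        payeeOrigin: "https://merchant.example",  // shown to the user as the merchant identifier
        instrument: {                             // card details rendered on the payment context screen
          displayName: "Example Card ••••1234",
          icon: "https://bank.example/card-art.png",
        },
        timeout: 60_000,
      },
    },
  ],
  { total: { label: "Total", amount: { currency: "EUR", value: "49.99" } } }
);

const response = await request.show(); // browser shows the transaction details; user authenticates with the passkey
await response.complete("success");
// response.details carries the FIDO assertion, which the merchant returns to the
// bank (step 4, e.g. within EMV 3DS) for validation.
```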
Figure 3 depicts an example of an authentication flow using SPC and EMV 3DS, with a previously registered passkey:
Figure 3: Authentication sequence using SPC and EMV 3DS
3.2 SPC With Payment Scheme as Relying Party
In some payment scenarios, payment schemes can be a relying party on-behalf of the banks to remove the need for banks to deploy a FIDO infrastructure, thereby scaling the adoption of passkeys faster.
The creation of a passkey can be initiated outside of or during the checkout process:
• Outside of the checkout: for example, when the consumer is within the banking application and the bank invites the consumer to create a passkey for faster and more secure transactions, the passkey can be created with the payment scheme as the relying party, and will be associated by the payment scheme to one or multiple payment cards and to the consumer device; or
• Before, during or after a checkout: for example, the consumer may be prompted to create a passkey for faster and more secure transactions at merchants supporting the payment scheme’s payment method. The passkey will be associated by the payment scheme to one or multiple payment cards and to the consumer device, once the identity of the consumer has been verified by the bank. Figure 4 depicts this sequence.

Figure 4: Passkey creation during checkout
Once the passkey creation is complete, any merchant that is using the authentication facilitated by the payment scheme will be able to benefit from SPC:
1. The merchant checks with the payment scheme that a passkey is available for the card used in the transaction and retrieves the passkey information from the payment scheme.
2. The merchant invokes the SPC web API with the merchant identifier and transaction amount.
3. If the browser can find a match for one of those passkeys on the device used by the consumer, the browser displays the merchant identifier, the transaction amount, and the card / bank / network logos to the consumer, then prompts for authentication with the passkey.
4. The authentication results are returned to the payment scheme, which validates the results.
5. The payment scheme shares those results with the bank, during an authorization message, for the bank to review and approve the transaction.

Figure 5 shows this sequence.

Figure 5: Authentication sequence using SPC
(left to right)
1. & 2. Checkout at the merchant’s store
3. Passkey is found, transaction details displayed and consent is gathered
4. Device authenticator prompts cardholder for gesture
5. Confirmation of gesture
6. Transaction completed by the merchant
3.3 Summary of SPC Benefits
The benefits provided by SPC include:
• Cross-origin authentication – Any merchant authorized by a Relying Party can request the generation of a FIDO assertion during a transaction, even when they are not the relying party. This provides a better user experience, as no redirect to the relying party is required to perform consumer authentication.
• Consistent user experience with increased trust – With SPC, the consumer has a consistent user experience across all merchants, independently of who plays the role of relying party. In each case, the consumer sees a window displayed by the browser that includes the payment details and the logos of their card / bank / payment scheme, increasing trust in using FIDO authentication for their payments.
• Increased security – With SPC, the FIDO assertion includes payment details, such as the merchant identifier and transaction amount, in the cryptogram generation, making it difficult to modify any of those details in the transaction without detection by the bank or payment scheme. This also simplifies compliance with local regulations such as the PSD2 requirements related to dynamic linking.

4. Status of SPC Support and Future Enhancements

4.1 Availability
Secure Payment Confirmation is currently published as a W3C Candidate Recommendation, and there is ongoing work to include it as an authentication method in EMVCo specifications.
At the time of writing, the availability of the Secure Payment Confirmation API is limited to:
• Google Chrome and Microsoft Edge browsers
• macOS, Windows, and Android operating systems

4.2 Future Enhancements
The W3C Web Payments Working Group continues to work and iterate on Secure Payment Confirmation with the goal of improving the security and the user experience when consumers authenticate for payments on the web.
Features currently under consideration include:
• Improve user and merchant experiences when there is not a credential available on the current device (i.e., a fallback user experience)
• Improve consumer trust with additional logos being displayed to the user, such as the bank logo and card network logo
• Improve security with support for device binding, with the browser providing access to a browser/device-bound key
• Consider additional use cases such as recurring payments or support for roaming and hybrid FIDO authenticators

An example of enhanced SPC transaction UX that is under review is illustrated in Figure 6.
Figure 6: SPC transaction UX under review
5. Conclusion

Secure Payment Confirmation (SPC) is a web standard that has been designed to facilitate the use of strong authentication during payment transactions with best-in-class user experience, where the relying party can be a bank or a payment scheme.
The main benefits of SPC are to deliver an improved user experience, with the display of transaction details that the consumer approves with FIDO authentication, and to enable cross-origin authentication when a merchant authenticates a consumer without the need to redirect to the relying party (the bank or the payment scheme).
SPC also facilitates the inclusion of the transaction details within the FIDO signature, which can help deliver higher security and/or simplify the compliance with local regulations.
6. Acknowledgements

The authors acknowledge the following people (in alphabetic order) for their valuable feedback and comments:
Boban Andjelkovic, BankID BankAxept
John Bradley, Yubico
Karen Chang, Egis
Jeff Lee, Infineon
Olivier Maas, Worldline

7. References

[1] “EMV 3-D Secure,” [Online]. Available: https://www.emvco.com/emv-technologies/3-d-secure/.
[2] “Secure Payment Confirmation,” [Online]. Available: https://www.w3.org/TR/secure-payment-confirmation/.
[3] “Secure Remote Commerce,” [Online]. Available: https://www.emvco.com/emv-technologies/secure-remote-commerce/.
Please join us in congratulating the LEXIDMA TC on this milestone.
OASIS and the LEXIDMA TC are pleased to announce that DMLex V1.0 CS01 is now available as a Committee Specification.
DMLex is a data model for modelling dictionaries (here called lexicographic resources) in computer applications such as dictionary writing systems. DMLex is a data model, not an encoding format. DMLex is abstract, independent of any markup language or formalism. At the same time, DMLex has been designed to be easily and straightforwardly implementable in XML, JSON, NVH, as a relational database, and as a Semantic Web triplestore.
The documents and all related files are available here:
LEXIDMA Data Model for Lexicography (DMLex) V1.0
Committee Specification 01
08 November 2024
Editable Source (Authoritative):
https://docs.oasis-open.org/lexidma/dmlex/v1.0/cs01/dmlex-v1.0-cs01.pdf
HTML:
https://docs.oasis-open.org/lexidma/dmlex/v1.0/cs01/dmlex-v1.0-cs01.html
Schemas:
XML: https://docs.oasis-open.org/lexidma/dmlex/cs01/schemas/XML/
JSON: https://docs.oasis-open.org/lexidma/dmlex/cs01/schemas/JSON/
RDF: https://docs.oasis-open.org/lexidma/dmlex/cs01/schemas/RDF/
Informative copies of third party schemas are provided:
https://docs.oasis-open.org/lexidma/dmlex/cs01/schemas/informativeCopiesof3rdPartySchemas
For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file, which you can download here: https://docs.oasis-open.org/lexidma/dmlex/cs01/dmlex-v1.0-cs01.zip
The post LEXIDMA TC’s DMLex V1.0 is now a Committee Specification appeared first on OASIS Open.
1 Introduction to the Showcase Program Secure Digital Identities
The German Federal Ministry for Economic Affairs and Climate Action (BMWK) is the initiator and funder of the „Secure Digital Identities“ Showcase Program. Over the course of four years (2021 to 2024), the four showcase projects – IDunion, ID-Ideal, ONCE, and SDIKA – have worked on more than 100 use cases related to secure digital identities. These projects have also developed various types of wallets, which have been tested and implemented in multiple pilot environments. The Showcase Program has supported Research & Development (R&D) efforts, resulting in the creation of seven edge wallets, three organizational wallets, and one cloud wallet. Within this framework, IDunion has specifically focused on developing organizational identities, including use cases such as „Know-your-supplier.“ This paper outlines the cost savings achieved through automated supplier master data management, leveraging EUDI wallets for legal entities and the EU Company Certificate Attestation (EUCC) issued by QEAA providers in accordance with Company Law. The paper was prepared by the scientific research provider for the program, „Begleitforschung Sichere Digitale Identitäten,“ led by the European School for Management and Technology (ESMT), on behalf of the BMWK.
2 Cost saving estimation
Current situation: Currently, corporations maintain their supplier and customer master data records manually, which is time-consuming and leads to errors and redundancies. Large corporations need to maintain, and assure the high quality of, anywhere from several hundred up to millions of master data records. The maintenance cost per data set was estimated at €11 per year. The master data set considered for the cost estimation was limited to company name and address data and is therefore a subset of the data that will be available with the PID for legal persons and the EUCC.
Solution based on EUDIW: EU Digital Identity Wallets (EUDIW), the PID for legal entities, and public registry extracts (e.g. the EUCC) as QEAAs enable almost completely automated management of business partner data. Suppliers present their attestations from their legal entity wallet to customers or to the legal entity wallets of other business partners. Presentation, verification, and the transfer to internal systems are performed automatically. This reduces the number of proprietary data records maintained in parallel and minimizes manual, error-prone data entry.
Cost savings: The solution enables estimated annual savings of €85 billion for German companies. Only German companies with more than €2 million in sales per year were included in this estimation, and it was assumed that only their European business partners provide their data as verifiable attestations. This underscores the transformative impact of the EUDIW solution on master data management and its strategic importance for the private sector on the path to digital efficiency.
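As a rough cross-check of how that headline figure is composed, the short sketch below simply recombines the component figures reported in the tables further down (values in billions of euros as stated in the source; per-company costs are rounded there, so the totals are approximate).

```typescript
// Recombining the stated component figures (in € billions) into the headline estimate.
const maintenanceCosts = 30.7 + 40.4 + 14.2;        // current master data maintenance ≈ 85.3
const implementationCosts = 0.098 + 0.126 + 0.111;  // one-off implementation ≈ 0.35
const annualWalletCosts = 0.0155 + 0.0505 + 0.185;  // annual EUDI wallet costs ≈ 0.25

const potentialSavings = maintenanceCosts - implementationCosts - annualWalletCosts;
console.log(potentialSavings.toFixed(1)); // ≈ 84.7 — the paper's potential total savings
```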
Conservative assumptions for the estimation model below
• Estimation of master data sets: The estimation is done by estimating the number of potential B2B relationships of companies and assuming that a B2B relationship generates at least one master data set. In practice, however, master data is often stored and replicated in different systems. As this is not considered, the cost savings in the estimation are calculated conservatively.
• Annual master data maintenance costs: On average, a company incurs annual costs of around €11 per master data set maintained. This is based on an estimation performed by the „Verband Deutscher Automobilhersteller“ (VDA).
• Number of master data sets for large companies: An average of 300,000 master data sets was assumed for large companies, based on project estimates and VDA work. It was also assumed that 60% of the master data per company is attributable to EU suppliers (i.e. 180,000 master data items on average for large companies) and therefore only these are relevant for the EUDIW-based solution.
• Scaling based on turnover: The estimated number of B2B relationships of large companies can be scaled to other company sizes based on turnover.
• Implementation costs: The implementation costs are assumed to be €600 per year for small companies (<€10 million turnover). These costs are scaled to the larger company categories based on turnover. In addition to the implementation costs, companies must purchase the mentioned attestations (LPID, EUCC); the assumed costs are €1,000 per year, independent of the size of the company. Further implementation costs, such as integration into ERP/CRM modules, are neglected, as it is assumed that the market leaders will integrate the EUDIW modules accordingly.
• Very small companies: Due to their high number and heterogeneity in turnover and employee structure, very small companies are not included in the modeling, which leads to a more conservative savings estimate.

Estimation model
Potential savings for German companies
Current costs for supplier master data maintenance: € 85.3bn
Implementation costs: € 0.35bn
Annual costs for the EU Digital Wallet: € 0.25bn
Potential total savings: € 84.7bn

Current master data maintenance costs
• Big Enterprises: 15,500 companies, € 2m costs per company, € 30.7bn total
• Small Medium Enterprises: 50,500 companies, € 0.8m costs per company, € 40.4bn total
• Small Enterprises: 185,000 companies, € 0.08m costs per company, € 14.2bn total
• Total maintenance costs: € 85.3bn

Implementation costs
• Big Enterprises: 15,500 companies, € 6,300 cost per enterprise, € 98m total
• Small Medium Enterprises: 50,500 companies, € 2,500 cost per enterprise, € 126m total
• Small Enterprises: 185,500 companies, € 600 cost per enterprise, € 111m total
• Total implementation costs: € 335m (€ 0.35bn)

Annual EUDI wallet costs
• Big Enterprises: 15,500 companies, € 1,000 cost per enterprise, € 15.5m total
• Small Medium Enterprises: 50,500 companies, € 1,000 cost per enterprise, € 50.5m total
• Small Enterprises: 185,500 companies, € 1,000 cost per enterprise, € 185m total
• Total annual EUDI wallet costs: € 251m (€ 0.25bn)

Unless otherwise stated, the source is based on the calculations and statements of the IDunion project and the

Trace Labs, the core builders behind the OriginTrail ecosystem, is pleased to announce the expansion of its advisory board with the addition of Fady Mansour, lawyer and partner with Friedman Mansour LLP and Managing Partner at Ethical Capital Partners. With his wide breadth of experience, Mr. Mansour brings important expertise in regulatory matters, particularly in online data protection.
In his advisory role, Mr. Mansour will provide strategic guidance to strengthen OriginTrail’s role in combating illicit online content, safeguarding intellectual property, and fostering reliable AI applications for a safer digital landscape as it pursues its Internet-scale ambition.
The OriginTrail ecosystem, powered by decentralized knowledge graph technology, is dedicated to promoting responsible AI and sustainable technology adoption. By joining the advisory board, Mr. Mansour will be instrumental in shaping Trace Labs’ mission to drive ethical, human-centric technological innovation across industries.
With Mr. Mansour’s addition, the Trace Labs advisory board now comprises:
• Dr. Bob Metcalfe, Ethernet founder, Internet pioneer and 2023 Turing Award Winner
• Greg Kidd, Hard Yaka founder and investor
• Ken Lyon, global expert on logistics and transportation
• Chris Rynning, Managing Partner at AMYP Venture — Piëch — Porsche Family Office
• Toni Piëch, Founder & Chair of Board at Toni Piëch Foundation & Piëch Automotive
• Fady Mansour, Managing Partner at Ethical Capital Partners

Trace Labs, Core Developers of OriginTrail, Welcomes Fady Mansour to the Advisory Board was originally published in OriginTrail on Medium.
In the realm of artificial intelligence (AI), particularly in robotics, trust is not just a luxury — it’s a necessity. The Three Laws of Robotics, conceptualized by the visionary Isaac Asimov, provide a well-known foundational ethical structure for robots:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Ensuring these laws are adhered to in practice requires more than just programming; it necessitates a system where the knowledge upon which AI agents operate is transparent, verifiable, and trusted. This is where OriginTrail Decentralized Knowledge Graph (DKG) comes into play, offering a groundbreaking approach to enhancing the trustworthiness of AI.
Transparency and verifiability

One of the key aspects of the DKG is its capacity for transparency. By organizing AI-grade Knowledge Assets (KAs) in a decentralized manner, DKG ensures that the data AI agents use to make decisions can be traced back to their origins, with any tampering or modifications of that data being transparently recorded and verifiable on the blockchain. This is crucial for the First Law, where transparency in data sourcing can prevent AI from making decisions that might harm humans due to incorrect or biased information.
Ownership and control

The DKG allows for each Knowledge Asset to be associated with a non-fungible token (NFT), providing clear ownership and control over the information. This aspect directly impacts how AI agents adhere to the Second Law. Namely, by allowing agents to own their knowledge, DKG empowers AI agents to respond to human commands based on a robust, reliable data set that they control, ensuring they follow human directives while also adhering to the ethical boundaries set by the laws. This capability also allows agents to monetize Knowledge Assets that they have created (i.e. charge other agents (AI or human) for accessing their structured data), enabling agents’ economic independence.
Contextual understanding and decision-making

The semantic capabilities of DKG provide AI with a richer context for understanding the world — an ontological, symbolic world model to complement GenAI inferencing, which is vital for the Third Law. The interconnected nature of knowledge in the DKG means it is contextualized better, allowing AI to make decisions with a comprehensive view of the situation. For example, understanding the broader implications of self-preservation in contexts where human safety is paramount ensures that robots do not prioritize their existence over human well-being.
Building trust through decentralization

Decentralization is at the heart of the DKG’s effectiveness in fostering trust:
• Avoiding centralized control: Traditional centralized databases can be points of failure or manipulation, especially in multi-agent scenarios. In contrast, DKG distributes control, reducing the risk of misuse or bias in AI decision-making. This decentralized approach helps build a collective, trustworthy intelligence that aligns with human values and safety.
• Community contribution: DKG facilitates a crowdsourced approach to knowledge, where contributions from various stakeholders can enrich the AI’s understanding of ethical and practical scenarios, further aligning AI behavior with the Three Laws. This community aspect also encourages ongoing vigilance and updates to the knowledge base, ensuring AI systems remain relevant and safe.

Grow and read AI Agents’ minds with the ChatDKG framework powered by DKG and ElizaOS

The upgrade of ChatDKG marks a pioneering moment, combining the power of the OriginTrail Decentralized Knowledge Graph (DKG) with the ElizaOS framework to create the first AI agent of its kind. Empowered by DKG, ChatDKG utilizes the DKG as collective memory to store and retrieve information in a transparent, verifiable manner, allowing for an unprecedented level of interaction where humans can essentially “read the AI’s mind” by accessing its data and thought processes. This unique feature not only enhances transparency but also fosters trust between humans and AI.
The integration with ElizaOS is based on a dedicated DKG plugin, with which ElizaOS agents can create contextually rich knowledge graph memories, storing structured information about their experiences, insights, and decisions. These memories can be shared and made accessible across the DKG network, forming a collective pool of knowledge graph memories. This allows individual agents to access, analyze, and learn from the experiences of other agents, creating a dynamic ecosystem where collaboration drives network effects between memories. See an example memory knowledge graph created by the ChatDKG agent here.
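To give a flavour of what publishing such a knowledge graph memory could look like, here is a hedged sketch using the dkg.js client. The node endpoint, blockchain identifier, wallet key, JSON-LD content, and option names are assumptions for illustration and are not taken from the ElizaOS DKG plugin’s actual API.

```typescript
// Hypothetical sketch: publishing an agent "memory" as a Knowledge Asset via dkg.js.
// Endpoint, blockchain settings, and content are placeholder assumptions.
import DKG from "dkg.js";

const dkg = new DKG({
  endpoint: "https://your-dkg-node.example",    // placeholder OriginTrail node
  port: 8900,
  blockchain: {
    name: "otp:2043",                           // placeholder network identifier
    privateKey: process.env.WALLET_PRIVATE_KEY, // publisher wallet key
  },
});

// A knowledge graph "memory": structured JSON-LD describing an agent observation.
const memory = {
  public: {
    "@context": "https://schema.org",
    "@type": "Observation",
    name: "ChatDKG trend observation",
    description: "Example agent memory recording an insight and its source.",
    dateCreated: new Date().toISOString(),
  },
};

// Create the Knowledge Asset; other agents can later resolve and verify it by its UAL.
const asset = await dkg.asset.create(memory, { epochsNum: 2 });
console.log(asset.UAL); // Universal Asset Locator pointing at the published memory
```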
Tapping into collective memory will be enhanced with strong agent reputation systems and robust knowledge graph verification mechanisms. Agents can assess the trustworthiness of shared memories, avoiding hallucinations or false data while making decisions. This not only enables more confident and precise decision-making but also empowers agent swarms to operate with unprecedented coherence and accuracy. Whether predicting trends, solving complex problems, or coordinating large-scale tasks, agents will be able to achieve a new level of intelligence and reliability.
Yet, this is only the beginning of the journey toward “collective neuro-symbolic AI,” where the synthesis of symbolic reasoning and deep learning, enriched by shared, verifiable knowledge, will redefine the boundaries of artificial intelligence. The possibilities for collaborative intelligence are limitless, paving the way for systems that think, learn, and evolve together.
Moreover, ChatDKG invites users to contribute to its memory base, growing and refining its knowledge through direct interaction. This interactive approach leverages the ElizaOS framework’s capabilities to ensure that each exchange informs the AI and enriches its understanding, making it a dynamic participant in the evolving landscape of knowledge.
Talk to the ChatDKG AI agent on X to grow and read his memory!
Bridging trust between humans and AI agents with Decentralized Knowledge Graph (DKG) and ElizaOS… was originally published in OriginTrail on Medium.
Six months ago, Hyperledger Indy on Besu officially joined the did:indy method with the introduction of the did:indy:besu identifier. This milestone brought Hyperledger Indy, an LF Decentralized Trust project, closer to becoming a key player in Self-Sovereign Identity (SSI) frameworks, with the potential to be a Trusted List Provider in the European Digital Identity Wallet (EUDI Wallet) under eIDAS 2.0. By aligning with W3C Verifiable Credentials (VC) and Decentralized Identifiers (DID) standards, Indy on Besu enhances interoperability, scalability, and usability for digital identity solutions.
As 2025 approaches, supply chain trends like digital transformation, AI, sustainability, and smart logistics remain top of mind.
In this episode, James Chronowski, Vice President of Strategic Account Management at GS1 US, joins hosts Reid Jackson and Liz Sertl to explore how data quality plays a crucial role in addressing these trends. James offers practical insights for businesses to tackle emerging challenges and seize opportunities in an evolving supply chain landscape.
In this episode, you’ll learn:
The top trends shaping the supply chain industry in 2025
Why data quality and governance are essential for businesses
How to build resilient supply chains in a rapidly changing environment
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(02:36) Current and future trends in the supply chain
(08:20) The foundational role of data governance
(11:42) How businesses can be more resilient in 2025
(13:52) James Chronowski’s favorite tech
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
James Chronowski on LinkedIn
As the European Union continues to strengthen its cybersecurity framework, the NIS2 Directive serves as a pivotal measure to protect critical infrastructure and essential services across member states.
Committed to advancing secure and interoperable digital identity standards, the OpenID Foundation welcomes the opportunity to contribute to this critical initiative and has provided comments on ENISA’s draft technical guidance for the cybersecurity measures of the NIS2 Directive.
Below, we highlight key themes and recommendations from our review.
Taking compliance monitoring beyond paper-based processes

ENISA’s guidance on compliance monitoring emphasizes regular reviews and reporting to management bodies. However, we believe this framework can be strengthened to incorporate empirical testing and global standards-based approaches.
Compliance monitoring should, for example, evaluate and certify implementations against relevant global standards. Certification ensures systems are built on a robust foundation and remain aligned with evolving requirements.
Processes also need to be more empirical. Paper-based compliance captures the intent of a solution but can fail to capture changes in deployed systems and can also lag evolving adversarial tactics. Real-time monitoring and periodic recertification are critical to address these gaps.
The OpenID Foundation’s recommendations are to incorporate steps to evaluate applicable global standards, test implementations for conformity to standards, certify compliance, and maintain real-time reporting to highlight non-conformant implementations.
Ensuring robust independent security reviews

Independent reviews are vital for assessing security practices. The OpenID Foundation emphasizes the importance of aligning such reviews with global standards.
When it comes to certification and self-certification, implementations of security protocols, such as OpenID Connect and FAPI, should undergo technical conformance testing and certification. This ensures interoperability and security across ecosystems.
Continuous testing also needs to be considered. Cloud-based and dynamic environments require ongoing testing to detect implementation issues in real-time.
The OpenID Foundation recommends that steps be taken to document technical certifications and ensure real-time conformance testing as part of the review process.
Expanding the scope of security testing

The guidance outlines a range of security tests, but omits protocol conformance testing. This is a critical measure for detecting implementation errors in security protocols.
The OpenID Foundation recommends revising the guidance to include protocol conformance testing alongside vulnerability assessments, penetration testing, and other methodologies. Additionally, OIDF emphasizes the importance of certification in ensuring secure implementations.
Network security addressing interoperability challenges

Modern systems increasingly rely on external integrations and APIs, creating a complex web of dependencies. ENISA’s guidance on network security should address these realities. All endpoints, especially those involving external integrations, should be regularly tested and certified for conformance.
Furthermore, integrating shared signals frameworks can enhance real-time risk detection and response, particularly in scenarios involving session or credential lifecycle changes.
The OpenID Foundation recommends adding provisions for endpoint testing and integration of shared signal protocols for dynamic decision-making and enhanced security.
Strengthening policies and implementation for access control

The guidance on access control would benefit from additional considerations. The incorporation of data classification and risk appetite into access control decisions will help ensure that controls are tailored to the sensitivity and criticality of assets. Another consideration is enabling real-time revocation of access rights based on signals indicating changes in risk.
The OpenID Foundation recommends including references to asset classification and shared signals frameworks to enhance access control policies.
Clarifying the scope of authentication and authorization

While the guidance addresses authentication, it often extends into areas of authorization without explicitly acknowledging the distinction. Clear terminology is essential to avoid misunderstandings. A starting point would be to rename the section to ‘Authentication and Authorization’ to reflect its broader scope.
Further, specifying secure protocols like OpenID Connect, FAPI, and Shared Signals Framework would help ensure implementations can effectively mitigate specific security risks.
The OpenID Foundation recommends renaming the section and providing detailed examples of secure authentication and authorization practices, including conformance testing.
Secure communication protocols for privileged and administrative accounts

Managing privileged accounts is a high-risk area that requires stringent controls, but the current guidance simply calls for policies for managing privileged accounts as part of access control.
The OpenID Foundation recommends referencing secure communication protocols for privileged account management. The need for rigorous authentication measures, linking to broader authentication requirements and the use of secure communication protocols, should also be highlighted clearly for implementers.
More accountability in identity management

Guidance around identity management highlights that organizations should maintain an inventory of user and privileged identities. However, more comprehensive identity management procedures and technology are needed to ensure risk is minimized and that there is clear accountability.
The OpenID Foundation recommends clarifying the term ‘service identities’ and providing examples. Details on the privileges associated with each identity should also be included, and identity lifecycle processes should be documented to ensure ongoing accountability.
Why global standards matter

The OpenID Foundation commends ENISA for its comprehensive guidance and commitment to improving cybersecurity across the EU and urges further alignment with global standards.
Standards, such as OpenID Connect and FAPI, ensure security, interoperability, and scalability across digital identity ecosystems. By incorporating them, ENISA can further enhance the effectiveness of cybersecurity measures, reduce fragmentation across EU member states, and foster trust and collaboration among stakeholders.
The OpenID Foundation remains committed to supporting ENISA and the broader cybersecurity community through open standards and constructive dialogue. We welcome the opportunity for follow-up discussions and stand ready to provide further input as needed.
About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy-preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Strengthening cybersecurity measures – the OpenID Foundation’s recommendations on ENISA’s guidance for the NIS2 Directive first appeared on OpenID Foundation.
The BeL2 Arbiter Network enables decentralized, non-custodial Bitcoin DeFi by connecting Bitcoin’s mainnet to EVM-compatible smart contracts through cryptographic proofs. This marks a pivotal moment where Bitcoin can interact and communicate with other blockchains, uniting years of innovation into a new non-custodial financial layer built on digital gold.
Arbiters play a critical role in this ecosystem, ensuring transaction fairness, resolving disputes, and earning fees. With the recent deployment of BeL2’s Beta Arbiter Network, now is the perfect opportunity to become one of the first to support the system and earn rewards, including the upcoming incentive program launching this Wednesday!
In this article, we provide a comprehensive guide on getting started, complete with a PDF and video support. Alongside the videos linked below, use the provided PDF as detailed documentation to assist you in the setup process. If you wish to instead speak with an AI Assistant, our BeL2 GPT is available 24/7 to help you with this task too. Okay, let’s jump in!
1. BeL2 Arbiter Prerequisites

Before beginning the setup, ensure you have the following:
Hardware Requirements:
• CPU: Minimum 2 cores
• RAM: 8GB+
• Storage: 100GB SSD+
• Network: 100Mbps internet connection

Software and Wallets:
• Elastos Smart Chain Wallet: funded with ELA tokens
• Metamask Browser Extension: for Arbiter registration
• Bitcoin Wallet: such as the Unisat or OKX browser extension
• Go (1.20 or newer): install from Go’s official site
• Mobile: use Web3 Essentials’ built-in browser
• Desktop: use the Metamask and Unisat wallets

Should you need to add the Elastos Smart Chain (ESC) network in Metamask, please refer to the Elastos networks section at https://elastos.info/explorers/.

2. Register as a BeL2 Arbiter

Please follow the provided PDF for detailed documentation.
Congratulations! By completing this setup, you’ll contribute to a decentralized BTC finance ecosystem while earning rewards as an Arbiter. This week, we will be announcing the community incentive program, where you can earn additional ELA rewards for participating in the system, including first-utility access to the native Bitcoin lending DApp. Did you enjoy this article? To learn more, follow Infinity for the latest updates here!
We are thrilled to announce that the Camino Network Foundation has joined the Decentralized Identity Foundation (DIF) as our newest member. This partnership marks an exciting development in our collective goals to build an open ecosystem for decentralized identity, particularly in the global travel space.
Advancing Digital Identity in Travel
Camino Network brings unique expertise in blockchain technology for the travel industry, operating a specialized layer-one blockchain designed specifically for travel-related use cases. Their focus on creating a permissioned yet public infrastructure for travel industry stakeholders enables interoperable, secure digital identity systems.
Shared Vision for Individual Control
What makes this partnership particularly compelling is our shared commitment to individual-controlled identity. Camino Network's architecture, which emphasizes regulatory compliance while maintaining user privacy and control, demonstrates their dedication to building responsible identity solutions that put users first.
Technical Innovation Meets Real-World Application
Camino Network's technical capabilities include sub-second transaction finality, high throughput of 4,500 transactions per second, an energy-efficient consensus mechanism, and strong security features.
These technical foundations provide an excellent platform for implementing decentralized identity solutions that can scale to meet the demanding needs of the global travel industry.
Looking Forward
Together with the Camino Network Foundation, we look forward to:
- Collaborating on educational initiatives, including webinars, hackathons, and related events
- Investigating cutting-edge identity solutions for travel-focused use cases
- Establishing standards that align traditional travel industry needs with decentralized identity principles
- Delivering more efficient and secure digital identity management options for travelers
- Promoting the real-world adoption of decentralized identity in travel applications

The addition of the Camino Network Foundation to DIF strengthens our community's ability to deliver practical, scalable identity solutions. Their expertise in building industry-specific blockchain infrastructure, combined with their understanding of travel industry requirements, will be invaluable as we work together to advance the state of decentralized identity.
Get Involved
We encourage all DIF members to welcome the Camino Network Foundation and explore potential collaborations. Welcome aboard, Camino Network Foundation! We're excited to work together in building the future of digital identity.
The post Results of 3rd Annual Elections to the Board of the Velocity Network Foundation appeared first on Velocity.
ABSTRACT: By grounding technical decisions in ethical values, we can create compassionate digital architectures. This article examines how core human values such as dignity, autonomy, and human rights inform the design of trustworthy digital systems to enable progressive trust, safeguard privacy, promote individual choice, and build resilient systems resistant to coercion.
As we enter 2025, I’m reflecting on a journey of decades dedicated to advancing privacy, security, and human autonomy in the digital age. My body of work dates back to the 1990s, with early contributions alongside cryptographic pioneers and my co-authorship of the IETF TLS 1.0 standard. This year also marks the 10th anniversary of the first “Rebooting Web of Trust” workshop, a milestone in my leadership role in shaping secure technologies such as Self-Sovereign Identity and the W3C Decentralized Identifiers standard.
Over the past decade, my focus as a trust architect has sharpened on designing digital systems that empower individuals while respecting core values such as autonomy and human dignity. These designs play a critical role in how individuals express themselves, engage with communities, and pursue their aspirations in a world increasingly shaped by digital interactions.
Yet, this digital realm presents a dual reality. While it opens up unprecedented opportunities, it also makes us increasingly vulnerable to exploitation, coercion, and pervasive surveillance. This tension places a profound responsibility on architects of digital systems: we must ensure that technical designs are guided by deeply rooted human values and ethical principles.
Looking ahead to the next ten years, I reaffirm my commitment to these values, charting a course for the future that places human flourishing and trust at the center of technological progress. But fulfilling this commitment requires grappling with a simple question that has a complex answer: how can we design systems that uphold dignity, autonomy, and human rights?
The Core Values of Autonomy & Dignity
When we design digital systems, we’re not just creating technical specifications. We’re crafting spaces where people will live significant portions of their lives. To give them the ability to truly live and excel, we must give them autonomy: a digital system must empower individuals to control their own destinies within this digital realm. To do so, it must provide them with tools that:
- Protect their data.
- Exercise control over their digital presence.
- Ensure freedom from coercion.
- Cultivate trust through direct, transparent & efficient peer-to-peer interactions.
- Facilitate interactions built on trust and agency.
- Enable meaningful participation in the digital economy.
- Support engagement that aligns with their values and priorities.
- Foster resilience against systemic vulnerabilities.
- Operate seamlessly across jurisdictions and political boundaries.

(See my “Principles of Dignity, Autonomy, and Trust in Digital Systems” in the Appendix for a more extensive look at what I consider core values for digital system design.)
Providing individuals with digital autonomy is mirrored by the concept of digital dignity. A digital system that prioritizes dignity respects the individuality of its users and safeguards their right to privacy. It minimizes the data collected, provides clear and revocable consent mechanisms, and ensures that control remains in the hands of the user. A dignified system doesn’t simply protect; it fosters agency and participation, allowing individuals to thrive without fear of surveillance, discrimination, or exploitation.
Autonomy is also closely linked to the concept of trust. You must be able to know and trust your peers in order to truly have the autonomy to make meaningful decisions. This is where systems like progressive trust come in.
A system built on autonomy, dignity, and trust ultimately treats individuals as more than their administrative identities; it recognizes that individuals possess an ineffable core of self that transcends digital representation. The first principle of Self-Sovereign Identity, ‘Existence,’ upholds this kernel of individuality, affirming that any digital identity must respect and support the inherent worth of the person behind it.
To properly respect autonomy and dignity also requires careful attention to power dynamics and accountability. Distinct standards of transparency and privacy should address the power imbalances between individuals and institutions. Achieving this balance involves respecting individual privacy while enabling appropriate oversight of powerful institutions. We must protect the vulnerable while ensuring our larger administrative systems remain fair and just.
We must also address the crucial question: how do we make privacy-preserving technology economically accessible to everyone? Any autonomy-enabling digital system must balance individual and collective interests by supporting sustainable development of digital infrastructure while fostering individual economic sovereignty and resilience. We must reward contributions to shared resources, uphold autonomy and self-determination, and ensure equitable access to rights-preserving technologies. By protecting individual freedoms and enabling fairness, privacy can ultimately be a tool that encourages participation regardless of economic means.
Decentralized identity wallets offer an example of how to embody the characteristics of autonomy, dignity, and trust, while also considering issues such as privacy, balance, and accessibility. They empower individuals to securely prove their credentials (such as educational achievements or professional certifications) directly to peers, without relying on central authorities that could arbitrarily deny their accomplishments. Consider Maria, a small business owner living in a vibrant but economically challenged favela neighborhood in Buenos Aires, Argentina. Using a self-sovereign, decentralized identity wallet provided by the city, she is able to secure microloans without compromising her privacy, a triumph for both dignity and autonomy.
As for how these core values transform into the design principles of decentralized identity wallets: that’s the next question to address.
From Values to Design Principles
The translation of the core values of autonomy, dignity, and trust into concrete design principles shapes every aspect of trust architectures I build and guides me to specific technical choices:
- Cryptographically secure, self-certifying identifiers that operate independently of central authorities.
- Local or collaborative key generation and management to keep control in users’ hands.
- Peer-to-peer protocols that resist centralized rent-seeking and walled gardens.
- Offline-first capabilities to prevent connectivity from becoming a point of coercion.
- Data minimization by default.
- Choices for elision and redaction to control what individuals share (see the sketch below).
- Cryptographic selective disclosure to prevent unwanted correlation and tracking.
- Revocable permissions to ensure users retain ongoing control over their information.
- Zero-knowledge proofs or other systems that can balance privacy and accountability without enabling bad actors.
- Decentralized architectures, not as an ideological preference, but as a practical necessity.

The importance of these protections isn’t theoretical. My work examining sensitive data — including wellness, educational credentials, financial transactions, and identity documentation — has revealed how seemingly benign information can threaten human rights when misused. Health data can enable discrimination or coercion. Educational records can create permanent, unchangeable markers that limit opportunities. Financial and identity data can be weaponized to exploit or disenfranchise individuals.
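As one deliberately simplified illustration of the elision and data-minimization choices above, the TypeScript sketch below commits to each credential field with a salted hash so that a holder can later reveal only selected fields (plus their salts) and a verifier can check them against the original commitments. This is a teaching sketch under stated assumptions, not a production selective-disclosure scheme such as BBS or SD-JWT; all names are illustrative.

```typescript
import { createHash, randomBytes } from "crypto";

// Commit to a single field value with a random salt: H(salt || value).
function commit(value: string, salt: Buffer): string {
  return createHash("sha256").update(Buffer.concat([salt, Buffer.from(value)])).digest("hex");
}

// Issuer side: produce a commitment per field; the commitments can be shared or signed openly.
function commitFields(fields: Record<string, string>) {
  const salts: Record<string, Buffer> = {};
  const commitments: Record<string, string> = {};
  for (const [name, value] of Object.entries(fields)) {
    salts[name] = randomBytes(16);
    commitments[name] = commit(value, salts[name]);
  }
  return { salts, commitments };
}

// Verifier side: check only the fields the holder chose to reveal.
function verifyRevealed(
  commitments: Record<string, string>,
  revealed: Record<string, { value: string; salt: Buffer }>
): boolean {
  return Object.entries(revealed).every(
    ([name, { value, salt }]) => commitments[name] === commit(value, salt)
  );
}

// Example: reveal only the over-18 flag, eliding name and address.
const { salts, commitments } = commitFields({ name: "Maria", over18: "true", address: "elided" });
console.log(verifyRevealed(commitments, { over18: { value: "true", salt: salts.over18 } })); // true
```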
A values-driven design can therefore be seen not just as an abstract focus on ideals such as autonomy, but as protection against real-world harms. The rights to be forgotten, to correct errors, and to recover from systemic or administrative injustices ensure fairness in digital interactions. The ability for an individual to selectively share aspects of their identity protects them from being reduced to digital records or confined to singular contexts.
From Design Principles to Education
Implementing human-centric design patterns reveals another challenge: helping developers to understand not just the technical complexity, but the human purpose behind each design choice. Developers must grasp not only how their systems operate but also why their design decisions matter for privacy, autonomy, and dignity.
While technical resources such as documentation and tutorials are indispensable for this education, true progress depends on fostering a compassionate culture where developers internalize value-driven imperatives. This has led me to prioritize the cultivation of decentralized developer ecosystems rooted in collaboration, open development standards, and shared learning. I’ve done this through a variety of means:
- Workshops that convene developers, policymakers, and advocates to share insights, collaborate, and explore innovative approaches.
- Hackathons and Sprints addressing pressing challenges in digital trust, enabling participants to co-create solutions in hands-on environments.
- Regular Developer Meetups for discussing current challenges, sharing practical experiences, and aligning on future roadmaps.
- Peer Review and Collaboration Forums to ensure transparency, accountability, and robust feedback in the development processes.
- Cross-Organization Coordination to facilitate collaborative projects, share resources, and distribute financial and time-related investments such as security reviews.
- Ecosystem Building to design decentralized solutions that balance individual empowerment with collective benefit, ensuring that all contributors — users, developers, and communities — derive meaningful value and that mutual respect is cultivated through shared goals and open participation.
- Mentorship Programs to guide emerging developers in adopting values-driven approaches, fostering ethical practices from the outset of their careers.
- Advocacy Efforts that include collaborating with policymakers and regulators to define a techno-social contract that upholds human dignity, ensures equitable and compassionate digital rights, and protects the interests of the vulnerable.

With this decentralized, collaborative approach to education, no single entity controls the evolution of these technologies. Instead, innovation is fostered across a diverse network of developers, building resilience into these systems and ensuring that solutions remain adaptable, inclusive, and accessible. This cooperative spirit reflects the very principles of autonomy, compassion, and inclusivity that underpin trustworthy digital systems.
From Education to Implementation
As communities evolve from educational groups to implementation groups, forums and discussions continue to expand the community and allow us to address the broader societal implications of technical choices. Foundational principles should follow.
The Ten Principles of Self-Sovereign Identity is an example of a set of foundational principles that directly evolved from discussion at an educational workshop (RWOT2). The Gordian Principles and Privacy by Demand are other examples of core principles that evolved out of earlier discussions. Principles such as these form a bedrock for the values we will work to embed in actual implementations.
Code reviews and project evaluations should then include these principles — and more generally ethical alignment — as a key criterion. They’re not just about technical correctness! By embedding values into every stage of development, we ensure that systems are designed to empower individuals, not exploit them.
How can we manage the critical balance between transparency for accountability and privacy for individuals? How do we address power dynamics and ensure systems protect the rights of the vulnerable while holding powerful entities accountable? Ultimately, how do we prioritize both user autonomy and security in decisions around data storage, key management, or cryptographic algorithms? These are questions that should both arise and be addressed when considering a full education-to-implementation pipeline that is based on collaboration and the consideration of values.
Ultimately, implementing systems that respect dignity and autonomy demands a new kind of techno-social contract. This contract must bridge multiple realms:
- The technical capabilities that make solutions possible.
- The cultural shifts that make them acceptable.
- The economic incentives that make them sustainable.
- The political will that makes them viable.
- The contractual & legislative agreements that make them durable.

This comprehensive approach will serve both individual autonomy and our collective commons.
By ensuring that digital trust and human dignity remain at the core of technological progress, we build systems that serve as a foundation for a more equitable, humane, and resilient digital future. The result is implementations that transcend technical excellence by instilling a sense of stewardship among developers. They become not just the creators of secure systems but also champions of the communities these systems serve.
From Implementation to Deployment
Any framework to support values such as autonomy, dignity, and trust must be holistic in its approach.
- Technical standards and specifications must harmonize with cultural norms and social expectations.
- Economic models must simultaneously foster individual resilience and collective benefits, ensuring that privacy and autonomy remain accessible to everyone and don’t become luxuries available only to the wealthy.
- Cultural norms and legislative efforts must go beyond surface-level privacy protections, addressing both the technical realities and human needs at stake.
- Most importantly, technical and political discourse must evolve to recognize digital rights as fundamental human rights. This paradigm shift would enable policies that support compassionate decentralized approaches while holding powerful actors accountable to the communities they serve.

Nurturing collaborative ecosystems plays a central role in this transformation. We must foster cultures of ethical awareness not just among developers but across society. This means supporting implementers and maintainers who understand not just the “how” of our systems, but the “why”. It means engaging leaders who grasp both technical constraints and human needs and creating sustainable economic models that reward contributions to the commons while protecting individual rights.
Legal deployment has always been one of the trickiest challenges in popularizing a system that supports individual autonomy, but the concept of Principal Authority presents a promising foundation, courtesy of Wyoming’s digital identity law. It goes beyond the traditional frameworks of property and contract law, which, while useful, are insufficient in addressing the unique challenges of digital identity.
Property law focuses on ownership and control and contract law governs agreements between parties, but neither fully captures the dynamic, relational nature of digital representations or the need for individual agency in decentralized systems. Principal Authority, grounded in Agency Law, functions much like the relationship between a principal and an agent in traditional legal contexts. For instance, just as an agent (like a lawyer or real estate agent) acts on behalf of a principal while preserving the principal’s control, Wyoming’s digital identity law ensures that individuals retain ultimate authority over any actions or representations made on their behalf in the digital space. This legal framework acknowledges human agency — not mere ownership or contractual consent — as the primary source of legitimate authority. The result is a modern recognition of individual sovereignty, and therefore autonomy, that still fosters collaboration and commerce in the increasingly interconnected digital realm.
But, even if Principal Authority does prove a useful tool, it’s just one tool in a whole toolkit that will be necessary to successfully deploy rights-supporting software into society.
Conclusion
My responsibility as a trust architect is not simply to build systems that work, but to build systems that work for humanity. This requires a steadfast commitment to values, a willingness to navigate difficult trade-offs, and a relentless focus on aligning design principles with human needs.
The technical challenges of implementing values-driven design are significant, but they’re challenges worth solving. When we build systems that respect human rights and dignity, we create digital spaces that enhance rather than diminish human flourishing.
As developers, policy makers, or advocates, we hold the power to embed human values into every line of code, every standard, and every policy. As we build tomorrow’s digital ecosystems, we must therefore ask: What can I do to make trust and dignity the foundation of our systems?
To answer that question in a positive way will ultimately require a multi-stakeholder effort where technologists, policy makers, and civil society collaborate to uphold principles of equity, inclusion, and transparency in all aspects of digital architecture, down the entire linked chain from values to design to education to implementation to deployment.
I hope you’ll be part of that undertaking.
Appendix 1: Principles of Dignity, Autonomy, and Trust in Digital Systems
While working on this article, I put together my own principles for dignity, autonomy, and trust in digital systems. As with my self-sovereign principles of a decade ago, I am offering these up for discussion in the community.
1. Human Dignity: Design systems that prioritize and respect the inherent dignity of every individual. Embed privacy protections, minimize data collection, and provide clear, revocable consent mechanisms that align with user empowerment. Protect individuals from harm while fostering compassionate digital environments that promote trust, human flourishing, and technological progress aligned with human-centric values, actively considering potential societal impacts and unintended consequences.

2. Autonomy & Self-Determination: Empower individuals to control their digital identities and make decisions free from coercion or undue influence. Enable them to manage their interactions, transact freely, preserve their sovereignty, act as peers not petitioners, and assert their rights through decentralized, compassionate, user-controlled systems.

3. Privacy by Design (& Default): Embed robust privacy protections into every system, implementing data minimization, selective disclosure, anti-correlation, and cryptographic safeguards as default practices. This ensures that users retain control over their information and remain shielded from tracking, correlation, and coercion.

4. Resilience Against Exploitation: Architect systems to withstand adversarial threats and systemic vulnerabilities. Leverage decentralization, cryptographic protections, and offline-first capabilities to empower users even in hostile and adversarial environments and to ensure autonomy remains intact under pressure.

5. Progressive Trust: Design systems that reflect the natural evolution of trust, enabling selective and intentional information sharing. Foster trust gradually through mutual engagement, avoiding premature commitments, unnecessary reliance on intermediaries, or imposed full disclosure.

6. Transparency & Accountability: Hold powerful institutions accountable while safeguarding individual privacy. Balance transparency with confidentiality to mitigate power imbalances, protect the vulnerable, and ensure justice and fairness in digital interactions. Ensure that innovation and system development prioritize fairness and compassionate considerations, holding powerful institutions accountable for societal impacts.

7. Interoperability: Foster systems that are interoperable across cultural, legal, and jurisdictional boundaries. Promote inclusivity by prioritizing open standards, decentralized infrastructures, and accessible tools that serve diverse communities while avoiding exclusivity or centralized gatekeeping.

8. Adaptive Design: Incorporate insights from Living Systems Theory, Ostrom’s Commons, and other governance and design models to build architectures that are dynamic, resilient, and capable of evolving alongside societal and technological changes. Emphasize adaptability through iterative growth, collective stewardship, and interoperability, balancing stability with flexibility to support sustainable and inclusive digital ecosystems.

9. A Techno-Social Contract: Bridge technical capabilities with cultural, economic, and legislative frameworks to create a sustainable, human and civil rights-preserving digital ecosystem. Recognize digital rights as fundamental human rights and align systems with shared values of autonomy, dignity, and collective benefit.

10. Ethics: Cultivate a culture of ethical awareness, critical thinking, and collaboration among developers, policymakers, and users. Ensure technical decisions align with principles of trust and dignity by embedding education, mentorship, and a commitment to shared responsibility in the development process. Encourage innovation that is mindful of societal impacts, fostering a development ethos that prioritizes responsibility and safeguards against unintended consequences.

Appendix 2: Use Cases for Values Designs
Values affect all of my designs. Following is some discussion of how they have influenced my work on self-sovereign identity and progressive trust.
Self-Sovereign Identity
The conviction that technical designs must be built on human values came into sharp focus for me in 2016 when I authored the 10 Principles of Self-Sovereign Identity. These principles were not born from technical specifications alone but from a deep commitment to dignity, autonomy, and human rights. Over time, those values have guided the development of technologies such as Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), and the DIDComm protocol for secure, private communication. They have also influenced broader thinking around cryptographic digital assets such as Bitcoin. I have come to see these values not as abstract ideals but as the very foundation of trust itself: principles that must underpin every digital system we create.
My principles of Self-Sovereign Identity were also built on a deep historical and philosophical foundation. The concept of sovereignty has evolved over centuries — from feudal lords to city-states to nations — consistently reflecting a balance between autonomy and interconnection. When I wrote about the principle of “Control”, it was not about advocating absolute dominion but about framing sovereignty as the right to individual agency and prosperity, much like medieval cities, which preserved their independence while flourishing within broader networks of trade and diplomacy.
This understanding was deeply influenced by Living Systems Theory, which shows how every entity maintains its autonomy through selective boundaries while remaining part of a larger ecosystem. Just as a cell’s membrane allows it to control what passes in and out while still participating in the larger organism, digital identity must enable both individual autonomy and collective participation. This biological metaphor directly informed principles such as “Existence” and “Persistence,” which recognize that identity must be long-lived but also able to interact with its environment, and “Access” and “Portability”, which define how identity information flows across boundaries.
The principles also reflect Ostrom’s insights about managing common resources as well as feminist perspectives on sovereignty that emphasize agency over control. When I wrote about the principles of “Consent” and “Protection”, I was describing the selective permeability of these digital boundaries—not walls that isolate, but membranes that enable controlled interaction. “Interoperability” and “Minimization” similarly emerged from understanding how sovereign entities must interact while maintaining their independence and protecting their core rights.
These concepts culminate in the final SSI Principles such as “Transparency,” which balances individual autonomy with collective needs, and “Portability,” which ensures that identities can move and evolve just as living systems do. Each principle reflects this interplay between values and technical implementation, creating a framework where digital sovereignty serves human dignity. They weren’t meant to be an endpoint but rather a starting point for an evolving discussion about sovereignty in the digital age — one that continues to guide our work as we push the boundaries of what’s possible in digital identity, ensuring our innovations prioritize human needs rather than subordinating them to technology.
The technical complexity required to implement such systems is significant, but it serves a deeply human purpose: the ability to build autonomy and trust.
Progressive Trust
Trust is not static; it evolves over time — a concept I describe as progressive trust. This principle reflects how trust naturally develops between people and organizations, both in the physical and digital worlds. Relationships are built incrementally, through selective and intentional disclosures, rather than being imposed upfront or dictated solely by third-party intermediaries. This gradual evolution is essential for fostering genuine connections while mitigating risks.
I discovered this concept through years of observing how people actually build relationships. For instance, when meeting someone at a conference, we don’t immediately share our life story. Instead, we begin with small exchanges, revealing more information as comfort, context, and mutual understanding grow. Digital systems must mirror this natural evolution of trust, creating environments that respect psychological needs and empower individual agency.
A well-designed system transforms these ideas about progressive trust into deployable systems by enabling users to disclose only what is necessary at each stage, while retaining the ability to refine or revoke permissions as relationships deepen, change, or dissolve. This flexibility demands advanced technical solutions, such as:
- Sophisticated cryptographic protocols that enable selective and intentional disclosure.
- Relationship-specific identifiers to ensure contextual privacy (sketched below).
- Mechanisms to prevent unwanted tracking or correlation.
- Tools that balance transparency with security, safeguarding trust while avoiding vulnerabilities that could undermine it.

The technical complexity required to implement such systems is significant, but it serves a deeply human purpose: enabling individuals to build trust incrementally, naturally, and on their own terms.
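As a minimal sketch of the relationship-specific identifiers mentioned above: deriving a distinct, stable identifier per peer from a single locally held secret prevents two relying parties from correlating a user by comparing identifiers. The HMAC-based derivation below is only an illustration of the concept, not the pairwise DID mechanisms real wallets use; the names are hypothetical.

```typescript
import { createHmac, randomBytes } from "crypto";

// A locally held master secret; in practice this would live in a wallet's secure storage.
const masterSecret = randomBytes(32);

// Derive a stable identifier that is unique to one relationship.
// Different peers see unrelated values, so they cannot correlate the user across contexts.
function pairwiseIdentifier(peer: string): string {
  return "did:example:" + createHmac("sha256", masterSecret).update(peer).digest("hex").slice(0, 32);
}

console.log(pairwiseIdentifier("airline.example")); // identifier shown only to the airline
console.log(pairwiseIdentifier("clinic.example"));  // unrelated identifier shown to the clinic
```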
Knowing from the start which values we are aligning with helps to define this sort of architecture, even (as with progressive trust) when doing so is hard. The result is an architecture that not only reflects the organic nature of human relationships but also upholds autonomy, fosters confidence, and protects against coercion or exploitation.
Arbiter Nodes Provide Time-Based Services and Dispute Resolution, Allowing Bitcoin to Remain on Mainnet While Tapping into Smart Contracts Across EVM Blockchains
Building on its vision of fully decentralized financial services powered by Bitcoin, Elastos today announced the public beta release of its Arbiter Network for the Bitcoin-Elastos Layer 2 protocol, BeL2. This step marks a major milestone in the growth of the BTCFi ecosystem, making it possible to secure BTC-backed loans, stablecoins, and other advanced smart contract solutions without ever relocating Bitcoin off the main network.
Developed by Elastos (ELA)—an early mover in SmartWeb technologies—the BeL2 protocol sets up a trustless clearing network that sends proofs rather than assets. Through these cryptographic proofs, smart contracts on EVM-compatible blockchains can verify that BTC remains locked on Bitcoin’s mainnet and use it as collateral. With the introduction of Arbiter nodes, developers and users now gain access to time-based transaction oversight and decentralized dispute resolution—features that clear the path for a new wave of decentralized finance built on Bitcoin’s security.
“The Arbiter Network is the final piece in our BeL2 infrastructure puzzle,” said Sasha Mitchell, Head of Operations at Elastos. “With Arbiter nodes providing trustless oversight and time-based services, we can offer a fully decentralized BTC finance platform—one that builds on Bitcoin’s resilience without depending on centralized custodians.”
How the BeL2 Arbiter Network Works
Under the surface, BeL2 keeps Bitcoin anchored to its original chain while enabling a range of financial operations elsewhere. First, users lock their BTC using dedicated mainnet scripts. This non-custodial model ensures that Bitcoin never needs to be wrapped or moved, preserving its core security properties and owner independence. Once locked, Zero-Knowledge Proofs (ZKPs) affirm the collateral status, allowing external networks to confirm how much BTC is involved without revealing private transactional data.
These ZKPs pass through a decentralized oracle service that conveys proof details—rather than assets—into EVM-based smart contracts. By transferring cryptographic confirmations instead of tokens, BeL2 avoids the pitfalls linked to wrapped BTC. Meanwhile, the newly released Arbiter Network oversees loan terms, coordinates time-based tasks, and resolves any disputes. Arbiter nodes pledge Elastos (ELA)—a reserve currency merge-mined with Bitcoin, drawing on up to 50% of its security—to uphold network reliability, earning ELA and BTC fees in return for their role.
This approach offers distinct advantages. Users can be confident that their secured Bitcoin stays under their control on its original chain. Arbiter nodes jointly validate transactions in a decentralized manner, creating a fair environment for everyone. Because the proofs rely on Zero-Knowledge methods, the system preserves strong security and privacy. By supporting EVM-based smart contracts, BeL2 also unlocks wide-ranging DeFi possibilities—spanning simple lending scenarios to more advanced stablecoin setups and beyond.
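To make the proofs-not-assets flow more tangible, here is a hedged TypeScript sketch (using ethers.js) of how a client-side component might submit a collateral proof to an EVM verifier contract on ESC. The contract address, ABI, method name, and RPC endpoint are invented placeholders, not BeL2's actual interfaces.

```typescript
import { Contract, JsonRpcProvider, Wallet } from "ethers";

// Hypothetical ABI -- BeL2's real contract and method names are not documented in this article,
// so everything below is an illustrative placeholder for the "send proofs, not assets" flow.
const verifierAbi = [
  "function registerCollateral(bytes32 btcTxId, bytes zkProof, uint256 lockedSats) returns (bool)",
];

// Placeholder values: the RPC endpoint and contract address must come from the official BeL2 docs.
const ESC_RPC = "https://api.elastos.io/esc";
const VERIFIER_ADDRESS = "0x0000000000000000000000000000000000000000";

async function submitCollateralProof(
  privateKey: string,
  btcTxId: string,
  zkProof: string,
  lockedSats: bigint
): Promise<void> {
  const provider = new JsonRpcProvider(ESC_RPC);
  const signer = new Wallet(privateKey, provider);
  const verifier = new Contract(VERIFIER_ADDRESS, verifierAbi, signer);

  // The EVM side only ever receives a zero-knowledge proof that BTC is locked on Bitcoin's
  // mainnet; the BTC itself never moves, is never wrapped, and stays under its owner's control.
  const tx = await verifier.getFunction("registerCollateral")(btcTxId, zkProof, lockedSats);
  await tx.wait();
}
```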
BeL2 Arbiter Beta Release Details
The beta release of the Arbiter Network will progress in stages, starting with reduced collateral limits to maintain stability and gather input from initial participants. During this phase, Arbiters can stake a small amount of ELA or ELA-based NFTs, ensuring enough protection to support fair procedures while the network is put to the test. At first, rewards will be distributed in ELA, with BTC fee structures planned for future updates. A user-friendly dispute resolution portal lets Arbiters track events, approve or challenge transactions, and honor time-based constraints for overall reliability.
“By blending Bitcoin’s security with Elastos’ scalable foundations, we’re establishing a new financial model—a decentralized bank of sorts, powered by code and cryptography,” said Sasha Mitchell. “Our broader aim is a smooth, global financial web that remains anchored to Bitcoin’s trust.”
Broader Implications for BTCFi
By directly connecting each DeFi transaction to Bitcoin’s mainnet, BeL2 removes the need for wrapped BTC setups, lessening complexity and eliminating custodial points. This breakthrough supports various use cases: non-custodial loans, stablecoins that can be redeemed at will, and decentralized trading markets offering both spot and derivative BTC products. With Arbiter nodes providing reliable governance and resolving disputes, the broader BeL2 sphere gains a unified system for organizing multi-party transactions with fairness—an advancement that places Bitcoin at the core of the future’s decentralized financial world.
Additional Information
- Learn More about BeL2
- Join the Arbiter Beta
- Contact info@elastos.org for partnership inquiries or media requests.

About Elastos
Elastos is a SmartWeb ecosystem builder focused on enabling decentralized application creation and cross-chain connectivity. Built on top of Bitcoin merge-mining, Elastos relies on the security of the world’s largest public blockchain and extends it with additional layers. The introduction of BeL2 and its Arbiter Network marks Elastos’ latest effort to advance a more open, clear, and trustless global financial system.
Here’s a picture of my customer journey with United Airlines:
I’m also a lifetime member of the United Club, thanks to my wife’s wise decision in 1990 to get us both in on that short-lived deal.
Premier Platinum privileges include up to three checked bags, default seating in Economy Plus (more legroom than in the rest of Economy), Premium lines at the ticket counter and Security, and boarding in Group One. There are more privileged castes, but this one is a serious tie-breaker against other airlines. Also, in all our decades of flying with United, we have no bad stories to tell, and plenty of good ones.
But now we’re mostly based in Bloomington, Indiana, so Indianapolis (IND) is our main airport. (And it’s terrific. We recommend it highly.) It is also not a hub for any of the airlines. The airline with the most flights connecting to IND is American, and we’ve used them. I joined their frequent flier program, got their app, and started racking up miles with them too.
Until this came in the mail today:
Everything they list is something I don’t want to do. I’d rather just accumulate the miles. But I can’t, unless I choose one of the annoyances above, or book a flight in the next three months.
So my customer journey with American is now derailed.
There should be better ways for customers and companies to have journeys together.
Here is one idea: having my United status and club membership mean something to other airlines. Because those are credentials. They say something about me as a potential passenger. It would be nice also if what I carry, as an independent customer, is a set of verifiable preferences—such as that I always prefer a window seat, never tow a rolling bag on board (I only have a backpack), and am willing to change seats so a family can sit together. Little things that might matter.
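To picture what such portable, verifiable preferences could look like as data, here is an illustrative TypeScript shape loosely modeled on the W3C Verifiable Credentials data model. The credential type, field names, identifiers, and issuer are invented for this sketch and are not an existing airline or standards-body schema.

```typescript
// Loosely modeled on the W3C Verifiable Credentials data model; the credential type,
// field names, and identifiers below are invented for illustration, not an existing schema.
interface TravelerPreferenceCredential {
  "@context": string[];
  type: string[];
  issuer: string; // who attests to these preferences (could be the traveler's own wallet)
  credentialSubject: {
    id: string; // the traveler's decentralized identifier
    seatPreference: "window" | "aisle" | "middle";
    carryOn: "backpack-only" | "rolling-bag";
    willingToSwapSeatsForFamilies: boolean;
    loyaltyStatus?: { program: string; tier: string }; // a claim that could be backed by another credential
  };
}

const myPreferences: TravelerPreferenceCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "TravelerPreferenceCredential"],
  issuer: "did:example:traveler-wallet",
  credentialSubject: {
    id: "did:example:traveler",
    seatPreference: "window",
    carryOn: "backpack-only",
    willingToSwapSeatsForFamilies: true,
    loyaltyStatus: { program: "united-mileageplus", tier: "premier-platinum" },
  },
};
```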
I bring all this up because fixing “loyalty” programs shouldn’t be left up only to the sellers of the world. They’ll all do their fixes differently, and they’ll remain deaf to good input that can only come from independent customers with helpful tools of their own.
Developing those solutions to the loyalty problem is one of our callings at ProjectVRM. I also know some that are in the works. Stay tuned.
ELA exists to bring together the strongest elements of Bitcoin’s security approach with a flexible blockchain structure. By making use of merge mining, ELA draws on Bitcoin’s hashrate while enforcing a limit of 28.22 million tokens. In this way, ELA adopts Bitcoin’s protective strength at a small fraction of its energy usage, shaping ELA into a “Bitcoin-Secured Reserve Asset.”
Security and scarcity define a blockchain’s worth. Bitcoin’s remarkable success is built on its proof-of-work protocol and a cap of 21 million coins. ELA follows that model: it anchors its proof-of-work to Bitcoin’s extensive hashrate, while its own final supply remains fixed. This approach builds confidence in ELA’s underlying economy. Bitcoin is almost impossible to compromise, and by extension, ELA gains that same resistance.
TeamELA.org, built by Infinity under team member Sasha Mitchell, is an educational hub for ELA which brings together merge-mining metrics, supply and value graphs, educational animations and exchange and wallet links for ELA holders—turning complex ideas into a clear set of tools. TeamELA.org connects to the Elastos Explorer and Minerstat in real time, revealing how ELA’s share of Bitcoin’s hashrate compares to its overall supply. By adding CoinGecko‘s price data, the site shows how ELA’s security links with tangible value, giving users a clear metric for judging its potential.
It also serves as a straightforward route to ELA on exchanges—both centralized (Coinbase, KuCoin, Gate.io, Huobi) and decentralized (Uniswap, Chainge Finance, Glide Finance)—eliminating the need to search for obscure markets. Meanwhile, quick wallet downloads for iOS or Android and a short “How to stake ELA” guide help newcomers store or stake their tokens within minutes.
Beyond these essentials, TeamELA.org offers interactive visuals to demonstrate how Bitcoin’s proof-of-work runs hand-in-hand with ELA, emphasizing the efficiency of merge mining. A supply timeline highlights how ELA issuance winds down by 2105, capped at 28.22 million. There’s also a value calculator, outlining how a slice of Bitcoin’s mining income might feed into ELA’s core worth. Together, these features deliver a complete view of ELA’s security, supply, and utility in one cohesive online hub.
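For a sense of the arithmetic such a calculator might involve, here is a small TypeScript sketch that computes a merge-mined security ratio and prorates a hypothetical slice of Bitcoin mining income. The formula and the example figures are illustrative assumptions, not TeamELA.org's actual model.

```typescript
// Share of Bitcoin's hashrate that also secures ELA via merge mining (a value between 0 and 1).
function securityRatio(elaMergeMinedEhPerSec: number, btcTotalEhPerSec: number): number {
  return elaMergeMinedEhPerSec / btcTotalEhPerSec;
}

// Naive prorating of annual Bitcoin mining income to the merge-mined share.
// This mirrors the idea of the site's value calculator only in spirit; the real model may differ.
function proratedSecuritySpend(annualBtcMiningIncomeUsd: number, ratio: number): number {
  return annualBtcMiningIncomeUsd * ratio;
}

// Example with made-up numbers: 300 of 600 EH/s merge-mined, $10B annual mining income.
const ratio = securityRatio(300, 600); // 0.5
console.log(proratedSecuritySpend(10_000_000_000, ratio)); // 5,000,000,000
```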
Behind the scenes, TeamELA.org relies on custom API hooks for data retrieval, frequently pulling stats such as Elastos’ and Bitcoin’s latest block hashrate, price, and supply figures so the displayed data stays current. Animations illustrate how ELA’s protocol uses Bitcoin’s calculations, emphasizing that ELA benefits from Bitcoin’s security with no added energy load.
- Bitcoin-Level Security: By gaining a noteworthy share of Bitcoin’s Exahashes per second, ELA surpasses other altcoins that operate on smaller pools. TeamELA.org computes this on the fly to show ELA’s strong security ratio.
- Fixed Supply: Capped at 28.22 million tokens, ELA reflects Bitcoin’s scarcity yet sets its own limit. A clear halving roadmap on the Supply page details its release steps—another reason some see ELA as akin to “digital gold.”
- Reduced Energy Use: Merge mining calls for no extra machinery. Bitcoin miners can solve ELA blocks alongside Bitcoin blocks with the same proof-of-work, lowering power demands while boosting overall safety.
- Decentralized Ecosystem: Elastos began as a wide-ranging Web3 platform, offering decentralized apps and services that draw on ELA’s heightened protection. As the network grows, ELA’s significance as the “fuel” for dApps rises, backed by Bitcoin’s level of assurance.
- Single-Stop Convenience: The portal merges stats, how-to resources, and integrated trading/wallet options in one location. New arrivals only need this one site to discover, acquire, and manage ELA.

OPEN-SOURCE CODE
In line with decentralized values, TeamELA.org’s repository is open to all. Anyone may review how the platform sources data from the Elastos Explorer or tracks price details from CoinGecko. Developers are free to examine, refine, or enhance each element. Providing open code encourages transparency.
ELA’s primary strength grows from pairing Bitcoin’s extensive hashrate with a strict supply limit as per Satoshi’s merge-mining vision shared here and here. This grants ELA the benefits of reliable proof-of-work at a lower energy cost—something uncommon among altcoins. By visiting TeamELA.org, users gain a clear rundown of ELA: side-by-side hashrate insights, supply data, paths to buy, and options to stake or store tokens.
Plenty of projects claim decentralization, but ELA directly connects with Bitcoin’s security approach, widely regarded as the most tested in the industry, supporting a network that remains resource-friendly and strong. This transparent method builds trust every step of the way, capturing ELA’s core purpose: harness Bitcoin’s raw power, maintain a fixed token count, and invite the public to participate in a new era of decentralized applications anchored by a proven proof-of-work foundation. What’s more, TeamELA.org is a portal to support CoinTelegraph in their upcoming research report on ELA, passed by the Cyber Republic in Proposal 176. Did you enjoy this article? To learn more, follow Infinity for the latest updates here!
January 2025
DIF Website | DIF Mailing Lists | Meeting Recording Archive
Table of contents
1. Decentralized Identity Foundation News
2. Working Group Updates
3. Open Groups
4. Announcements at DIF
5. Community Events
6. DIF Community
7. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Labs Launches Beta Cohort
DIF's commitment to accelerating decentralized identity innovation takes a major step forward with the launch of DIF Labs' Beta Cohort. This new initiative brings together leading projects in Bitcoin Ordinals, Linked Claims, and privacy-preserving verification through VerAnon. Learn how DIF Labs is transforming how decentralized identity solutions are built, tested and scaled at DIF Labs: DIF Launches Beta Cohort.
Thank you to DIF Labs Chairs Andor Kesselman, Ankur Banerjee, and Daniel Thompson-Yvetot for their tremendous stewardship in building the Labs community, which will be a major focus for DIF in 2025!
Official Adoption of BBS Blind Signatures and Pseudonym Specifications by the CFRG
Major progress in privacy-preserving credentials as BBS Blind Signatures and BBS Pseudonyms specifications are officially adopted by the Crypto Forum Research Group (CFRG). These developments bring us closer to standardized, privacy-protecting digital credentials. Read about BBS and how to get involved at BBS: Where Proof Meets Privacy.
DIF Technical Leaders Engage Korean Students
DIF extends its educational outreach in Asia as Markus Sabadello and Kyoungchul Park delivered an engaging session on decentralized identity technologies at Seoul's MegaStudy academy. The lecture covered DIDs, VCs, and digital wallets, demonstrating DIF's commitment to nurturing the next generation of identity technologists. Read the full story at DIF Technical Leaders Engage Korean Students.
🛠️ Working Group Updates

📓 DID Methods Working Group
Welcomed new co-chairs Jonathan Rayback and Matt McKinney. Made progress on selection criteria for DID methods, working to refine criteria list and gather method proposals. Group developing template for proposals and planning process to evaluate DID methods. Next meeting set for January 15th to continue work.
DID Methods Working Group meets bi-weekly at 9am PT/ noon ET/ 6pm CET Wednesdays
💡 Identifiers and Discovery Working Group
Discussed DID traits specification and potential IPFS improvements. Team explored using IPFS as a DID method and how to enhance file identification. Discussed document controller properties and DID method comparisons.
Advanced work on DID Web VH specification, focusing on key rotation, version management and revocation approaches. Discussed version ID formats and query parameters. Team preparing to finalize v0.5 of specification.
Identifiers and Discovery meets bi-weekly at 11am PT/ 2pmET/ 8pm CET Mondays
🪪 Claims & Credentials Working Group
Planning for January 28th session on age verification and credentials standardization. The session will cover proof of age schema, data models, and practical applications like age-restricted content, senior services, and substance purchasing verification. Register here to attend
The Credential Schemas work item meets bi-weekly at 10am PT / 1pm ET / 7pm CET Tuesdays
🔐 Applied Crypto Working Group
BBS+ specs CFRG adoption call succeeded! Exploring range proofs and verifiable encryption concepts.
The DIF Crypto - BBS work item meets weekly at 11am PT/2pm ET /8pm CET Mondays
🧪 DIF Labs Working Group
Beta cohort projects progressing well - discussed EUDI wallet's new payment features, the Linked Trust project development, and connecting mentors with specific expertise to projects. Planning for mid-February presentations.
The Credential Schemas work item meets monthly on the 3rd Tuesday at 8am PT / 11am ET / 5pm CET
If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.
📖 Open Groups at DIF

DIDComm User Group
Reviewed outcomes from recent DIDComm interopathon, though participation was limited to two organizations. Identified issues with DIDComm Demo around multi-key implementation and JWK support. Proposed moving to bi-weekly meetings with alternating times to accommodate different time zones. Planning another interopathon for Q1 2025.
Meetings take place weekly on Mondays at noon PST. Click here for more details
Veramo User GroupMeetings take place weekly on Thursdays, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details
🌏 APAC/ASEAN Discussion GroupThe DIF APAC call takes place Monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.
🌍 DIF AfricaMeetings take place Monthly on the 3rd Wednesday at 1pm SAST. Click here for more details
🌍 DIF Japan SIG
Focused on interoperability discussions from recent Taipei conference. Covered integration with existing systems and digital identity platform shifts. Discussed Taiwan's approach to decentralized models.
Meetings take place on the last Friday of each month 8am JST. Click here for more details
🌍 DIF Hospitality & Travel SIG
Made progress on standardizing data formats, internationalization, and location services. Working on documentation for C-suite audiences and implementation guides. Developing approaches for currency handling and regional variations in user profiles.
Meetings take place weekly on Thursdays at 10am EST. Click here for more details
📢 Announcements at DIF

Proof of Age Workshop (Webinar)
Join us on January 28th at 10am PT for an important discussion on proof of age solutions and age-related verifiable credentials. The session will explore DIF's CC working group's initiatives around implementing privacy-preserving age verification for various use cases including:
- Age-related discounts and benefits
- Minor protection and age-gating
- Access control for age-restricted products and services
- Coordinating with related industry efforts

This collaborative session aims to bring together stakeholders to share insights and align efforts in building standardized, privacy-respecting age verification solutions.
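For readers who want a concrete starting point ahead of the session, here is a purely illustrative TypeScript sketch of an age-threshold claim shape. It is not the working group's actual schema; every field name here is an assumption, and the point is simply that a predicate (such as "over 18") can be asserted without disclosing a birthdate.

```typescript
// Purely illustrative shape for an "over a threshold age" claim -- not DIF's actual schema.
// The key privacy idea: assert a boolean predicate (e.g. over 18) rather than a birthdate.
interface ProofOfAgeCredentialSubject {
  id: string; // holder's decentralized identifier
  ageOver: number; // the threshold being asserted, e.g. 18, 21, 65
  assertionMethod: "document-check" | "credential-derivation"; // how the issuer established it
}

const example: ProofOfAgeCredentialSubject = {
  id: "did:example:holder",
  ageOver: 18,
  assertionMethod: "credential-derivation",
};
```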
🗓️ DIF Community

DIDComm: Securing Industry 4.0 Communications
Read Dr. Carsten Stöcker's insightful analysis of how DIDComm can address critical security vulnerabilities exposed by the recent Salt Typhoon attacks. Learn how DIDComm's end-to-end encryption and perfect forward secrecy enable secure machine-to-machine and business-to-business communications for Industry 4.0. Read more
👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.
New Member Orientations
If you are new to DIF join us for our upcoming new member orientations. Please subscribe to DIF’s eventbrite for upcoming notifications on orientations and events.
🆔 Join DIF!
If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:
Follow us on Twitter/X
Join us on GitHub
Subscribe on YouTube
🔍 Read the DIF blog
New Member Orientations
If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF’s Slack or contact us at community@identity.foundation if you need more information.
The EU’s Digital Operational Resilience Act (DORA) is set to take effect in January 2025. Its aim is to ensure that companies and institutions active in the EU financial sector are prepared to withstand operational disruption and cyberattacks.
DORA will have significant implications for financial services organisations: under its mandate, financial organisations and any of their IT, cybersecurity or identity management providers must meet DORA’s stringent guidelines by January 17, 2025 in order to mitigate ICT risk across the financial landscape.
Many Kantara Initiative members and assurance program clients provide IT or cyber security within an EU financial organisation. Most will be gearing up to meet the requirements of DORA by the January deadline. Some, however, still find themselves struggling to set everything in line now that there is just a month to go – and much still to do.
In our regular communications with clients, we find that many still have questions about how DORA might impact organisations based outside the EU.
In short: while DORA primarily applies to organisations delivering services to customers within the EU, the framework’s security requirements also apply to third-party ICT service providers irrespective of where that service provider is located. This means that any third-party ICT service (including data analytics, payments, data centre services or cloud software) provided from outside the EU, may still fall into scope if it is used by an EU financial entity. Likewise, any US-based financial institution with subsidiaries or suppliers in the EU should also be aware of the implications.
What is DORA?
Put simply, it is a regulatory initiative imposed by the EU to harmonise Information and Communication Technology (ICT) risk requirements in the financial services industry across Europe. Its objective is to ensure a more resilient and robust risk position for all financial services providers. The result is a detailed and comprehensive framework based on the pillars of:
- ICT risk management and governance
- incident reporting
- resilience testing
- third-party risk
- information sharing

As a community of identity professionals, we fully support any regulation intended to enhance the digital operational resilience of financial entities so they can prevent and mitigate, respond to and recover from cyber threats and all types of ICT-related disruptions. In many ways DORA simply puts into law standards and guidelines that financial service providers should already follow as best practice.
Resilience is the key word. Many US based companies will be well aware of the Federal Reserve System’s “Interagency Paper on Sound Practices to Strengthen Operational Resilience” which was published in November 2020. Much of the DORA framework mirrors Operational Risk Management, Business Continuity and Third-Party Risk Management, as identified in the Interagency Paper. The management of secure and resilient information systems underpins operational resilience. Without these measures, financial institutions will remain vulnerable to the risk of disruption with far-reaching consequences on the financial industry.
DORA overlaps in several ways with other legislative frameworks, such as the Directive on the Resilience of Critical Entities (CER) and the Directive on Network and Information Security (NIS2). Some organisations may find they need to be compliant with both DORA and NIS2.
Who must comply with the Digital Operational Resilience Act (DORA)?
Organisations that employ more than 10 people and have a turnover and/or annual balance sheet total that exceeds EUR €2 million are required to be compliant with DORA. The Regulation will apply to the following entities:
- credit institutions
- payment institutions, including those exempted under Directive (EU) 2015/2366
- account information service providers
- electronic money institutions, including those exempted under Directive 2009/110/EC
- investment firms
- crypto-asset service providers as authorised under a regulation of the European Parliament and of the Council on Markets in Crypto-Assets and amending regulations (EU) No 1093/2010 and (EU) No 1095/2010 and Directives 2013/36/EU and (EU) 2019/1937 (‘the Regulation on markets in crypto-assets’), and issuers of asset-referenced tokens
- central securities depositories
- central counterparties
- trading venues
- trade repositories
- managers of alternative investment funds
- management companies
- data reporting service providers
- insurance and reinsurance undertakings
- insurance intermediaries, reinsurance intermediaries and ancillary insurance intermediaries
- institutions for occupational retirement provision
- credit rating agencies
- administrators of critical benchmarks
- crowdfunding service providers
- securitisation repositories
- ICT third-party service providers

Are there penalties for non-compliance?
There are significant penalties for non-compliance, and the European Supervisory Authorities (ESAs) have the power to impose fines on firms that violate DORA’s requirements.
- Companies may be fined up to 2% of their total annual turnover globally OR up to 1% of the average daily turnover globally
- Individuals can face a fine of up to €1,000,000
- Third-party ICT service providers that have been designated to be critical by the ESAs could see fines of up to €500,000 for individuals and €5,000,000 for companies

Since reporting is a specific requirement of the legislation, a financial services company may also incur significant fines if it fails to report a major ICT-related incident or threat.
How can Kantara Initiative help?
It is part of our commitment to our members to keep abreast of the various standards, legislation and directives that may apply across our community. We are always keen to help members navigate the ever-changing landscape. If you are concerned about how DORA may affect you, please contact us to discuss.
Next month we will be talking about how NIS2 affects the Essential Entities (EE) of Digital Infrastructure and the Important Entities (IE) of Digital Providers.
The post Are you ready for the new EU DORA regulations? appeared first on Kantara Initiative.
The post Third OpenID4VP Implementer’s Draft Approved first appeared on OpenID Foundation.
The FIDO Alliance’s Seoul Public Seminar was held on December 10, 2024, at the SK Telecom Pangyo Office. The theme for this milestone event was “Unlocking a Secure Tomorrow with Passkeys,” and it attracted nearly 200 attendees. The seminar gave professionals a chance to share the latest developments and implementations of simpler and stronger online authentication technology with passkeys.
Watch the Recap Video
The seminar featured a dynamic mix of global and local case studies and offered a comprehensive overview of Passkey/FIDO and FDO (FIDO Device Onboard) implementations. Here are some key highlights:
FIDO Alliance Update: Andrew Shikiar (Executive Director & CEO of the FIDO Alliance) announced the launch of Passkey Central, a resource hub offering guidance on implementing passkeys for consumer sign-ins. The site is now available in Korean, Japanese, and English.
What’s New with Passkeys on Google Platforms?: Eiji Kitamura (Developer Advocate at Google) discussed recent passkey advancements, including Android’s Credential Manager API and broader passkey support on Google platforms.
From Passwords to Passkeys: The TikTok Passkey Journey: XK (Sean) Liu (Technical Program Manager at TikTok) shared how the TikTok platform adopted passkeys for both enterprise and consumer services.
Secure Smart TV Authentication with Passkeys: Min Hyung Lee (Leader of the VD Business Security Lab at Samsung Electronics) demonstrated how passkeys enhance smart TV user authentication and outlined the future for this technology.
FIDO in eCommerce: Mercari’s Passkey Journey: Naohisa Ichihara (CISO at Mercari) detailed the company’s motivations, challenges, and strategies for mitigating phishing risks through passkey adoption within the C2C marketplace.

The 2024 Seoul Public Seminar also featured an exciting and interactive segment: the FIDO Quiz Show. Designed to engage attendees while reinforcing key learnings, the quiz brought an additional layer of fun and competitiveness to the event.
How it worked:
Session Pop Quizzes: After each seminar session, key takeaways were tested through pop quizzes. Attendees who answered correctly were rewarded with FIDO Security Keys, generously supported by Yubico.
Real-Time Quiz Show: At the end of the event, a live quiz show engaged all attendees. By scanning a QR code, participants could join in and compete for prizes. Eunji Na from TTA emerged as the top scorer and won a Samsung Galaxy Smartphone!
Think you know FIDO Alliance and passkeys? Test your knowledge with the same 15 quiz questions (in Korean) by scanning the QR code in the image below.
The seminar gained significant local media attention from outlets such as IT Daily, DailySecu, Byline Networks, Datanet, BoanNews, eDaily, and Korea Economic Daily. Coverage highlighted the launch of Passkey Central, emphasizing its potential to accelerate passkey adoption and reduce reliance on passwords.
We extend a heartfelt thanks to all speakers, including Kieun Shin and Hyungchul Jung (Co-Vice Chairs of the FIDO Alliance Korea Working Group), Heungyeol Yeom (Emeritus Professor at Soonchunhyang University), Jaebeom Kim (TTA), Yuseok Han (AirCuve), Heejae Chang and Keiko Itakura (Okta), Junseo Oh (Ideatec), and Simon Trac Do (VinCSS) for their invaluable contributions.
We also express our gratitude to our sponsors, whose support made this year’s Seoul Public Seminar a resounding success.
Proudly Sponsored by:

Comment Period Ends - January 27th
OASIS members and other interested parties,
OASIS and the XLIFF TC are pleased to announce that XLIFF v2.2 CSD02 is now available for public review and comment.
This is a two-part specification which defines Version 2.2 of the XML Localisation Interchange File Format (XLIFF). The purpose of this vocabulary is to store localizable data and carry it from one step of the localization process to the next, while allowing interoperability between and among tools.
The documents and all related files are available here:
XLIFF Version 2.2 Part 1: Core
Committee Specification Draft 02
12 November 2024
Editable source:
https://docs.oasis-open.org/xliff/xliff-core/v2.2/csd02/xliff-core-v2.2-csd02-part1.xml (Authoritative)
HTML:
https://docs.oasis-open.org/xliff/xliff-core/v2.2/csd02/xliff-core-v2.2-csd02-part1.html
PDF:
https://docs.oasis-open.org/xliff/xliff-core/v2.2/csd02/xliff-core-v2.2-csd02-part1.pdf
XLIFF Version 2.2 Part 2: Extended
Committee Specification Draft 02
12 November 2024
Editable source:
https://docs.oasis-open.org/xliff/xliff-core/v2.2/csd02/xliff-extended-v2.2-csd02-part2.xml
(Authoritative)
HTML:
https://docs.oasis-open.org/xliff/xliff-core/v2.2/csd02/xliff-extended-v2.2-csd02-part2.html
PDF:
https://docs.oasis-open.org/xliff/xliff-core/v2.2/csd02/xliff-extended-v2.2-csd02-part2.pdf
For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at:
https://docs.oasis-open.org/xliff/xliff-core/v2.2/csd02/xliff-core-v2.2-csd02.zip
How to Provide Feedback
OASIS and the XLIFF TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of our technical work.
The public review starts 20 December 2024 and ends 27 January 2025 at 23:59 UTC.
Comments may be submitted to the project by any person through the use of the project’s Comment Facility. Members of the TC should submit feedback directly to the TC’s members-only mailing list. All others should follow the instructions listed here.
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1], applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.
OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.
Additional information about the specification and the XLIFF TC can be found at the public home page here.
Additional references:
[1] https://www.oasis-open.org/policies-guidelines/ipr/
[2] https://www.oasis-open.org/committees/xliff/ipr.php
Intellectual Property Rights (IPR) Policy
The post XLIFF v2.2 CSD02 is now available for public review appeared first on OASIS Open.
The Digital Credentials Protocols (DCP) working group recommends approval of the following specification as an OpenID Implementer’s Draft:
OpenID for Verifiable Credential Issuance: https://openid.net/specs/openid-4-verifiable-credential-issuance-1_0-15.html

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the 45-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve this draft as an OpenID Implementer’s Draft. For the convenience of members, voting will actually begin a week before the start of the official voting period, for members who have completed their reviews by then. This would be the second Implementer’s Draft of the specification.
The relevant dates are:
Implementer’s Draft public review period: Friday, December 20, 2024 to Sunday, February 2, 2025 (45 days)
Implementer’s Draft vote announcement: Monday, January 20, 2025
Implementer’s Draft early voting opens: Monday, January 27, 2025 *
Implementer’s Draft official voting period: Monday, February 3 to Tuesday, February 10, 2025 *

* Note: Early voting before the start of the formal voting period will be allowed.

The Digital Credentials Protocols (DCP) working group page is https://openid.net/wg/digital-credentials-protocols/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the Contribution Agreement at https://openid.net/intellectual-property/ to join the working group (at a minimum, please specify that you are joining the “DCP” working group or select “All Work Groups” on your Contribution Agreement), (2) joining the working group mailing list at openid-specs-digital-credentials-protocols@lists.openid.net, and (3) sending your feedback to the list.
-Marie Jordan, OpenID Foundation Secretary
About the OpenID Foundation
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Public Review Period for Proposed Second Implementer’s Draft of OpenID for Verifiable Credential Issuance Specification first appeared on OpenID Foundation.
Bias in biometric identity systems still exists, but it is manageable, argues Andrew Shikiar at the FIDO Alliance
When you unlock your smartphone, open your bank app, or approve a purchase on your laptop, you are using biometric authentication. It is such an unconscious part of our daily lives that if you blink, you might miss it.
It’s no wonder that biometrics are popular with consumers—they’re convenient and secure. Recent FIDO research found that consumers want to use biometrics to verify themselves online more often, especially in sensitive use cases like financial services, where nearly one in two people (48%) said they would use biometric technology. In fact, in the FIDO Alliance’s latest online barometer survey, consumers ranked biometrics as the most secure and preferred way to log in.
But for consumers, governments and other implementers, there is still a lingering ‘elephant in the room’ that continues to disrupt adoption: bias.
Should we worry about bias in biometrics?
FIDO Alliance’s research, Remote ID Verification – Bringing Confidence to Biometric Systems Consumer Insights 2024, found that consumers are concerned about bias in biometric facial verification systems: while the majority of consumers (56%) felt confident that face biometric systems could accurately identify individuals, a number still had concerns about discrimination present in some systems.
Concern surrounding the accuracy of biometric systems in processing diverse demographics has been growing in recent years. In the UK in 2021, for example, Uber drivers from diverse ethnic backgrounds took legal action over claims that the company’s facial recognition software had illegally terminated their contracts because it was unable to recognise them.
While the struggle of Uber drivers is just one example that underscores the issue, this problem is affecting people of colour and other underrepresented demographics more broadly—FIDO’s research found that one in four respondents feel they experience regular discrimination when using automated facial biometric systems (25%).
Feelings of discrimination and bias in facial recognition systems impact the entire user experience and erode faith in the technology overall. Half of British consumers in the survey said they would lose trust in a brand or institution if it were found to have a biased biometric system, and 22% would stop using the service entirely.
It’s clear why organisations like governments and banks would worry about these hard-hitting reputational and trust risks. Despite biometrics being widely accepted as a more convenient and highly secure technology, the small number of systems that aren’t as accessible are leaving an air of concern that is slowing down more mainstream adoption.
Addressing bias in facial verification
The most important thing to note is that not all biometric systems are created equal. Currently, testing is done on a case-by-case basis for each organisation, which is both costly and time-consuming, with varying definitions of what “good” looks like.
Based on proven ISO standards and developed by a diverse, international panel of industry, government, and identity experts, FIDO Alliance’s new Face Verification Certification program brings the industry’s first independent certification to market to build trust around biometric systems’ performance.
The certification assesses a face verification system’s performance across different demographics, including skin tone, age, and gender, in addition to far more wide-reaching security and performance tests.
The intensive security and liveness testing also verify that a provider’s face verification system can accurately confirm identities are real and authenticating in real-time, keeping threats like identity theft and deepfakes at bay. This is especially important for the most common use cases of face verification, like creating secure accounts, authenticating users, recovering accounts, and resetting passwords.
The beauty of independent certification is it sends a clear signal to consumers, potential clients, and auditors that the technology has been independently tested and is ready for both commercial and government use. It’s about building trust and showing that the provider takes security and fairness seriously.
More broadly, certification and independent global testing spark innovation and boost technological adoption. Whether you’re launching an identity verification solution or integrating it into regulations, open standards and certification provide a clear performance benchmark. This streamlines efforts, boosts stakeholder confidence and ultimately enhances the performance of all solutions on the market.
The future of identity
As the way we verify digital identities keeps evolving and demand to prove who we are remotely increases, biometric systems must be independently verified and free from bias. All technologies rolled out to this scale need to be fair and reliable for everyone.
The FIDO Alliance’s program demonstrates solution providers are serious about making sure biometric identity verification technologies are trustworthy, secure, and inclusive for all users. It’s like having a gold star or a seal of approval that says, “Hey, you can trust this system to be fair and safe.”
Biometrics for online identity verification are not just a promising concept; they are rapidly becoming a practical necessity in today’s increasingly digital world, and they are ready for implementation across various industries. With independent certification, organisations can clear the final hurdle to widespread adoption, empowering a future of more seamless, digital and remote identity.
1. What is the mission and vision of Facephi?
Facephi’s mission is to create seamless, trustworthy digital identity experiences that prioritize security, privacy, and compliance. We enable businesses to transform by connecting users to the digital resources they need efficiently and safely—whether as employees, partners, or consumers. Through our advanced identity verification technology, we simplify and secure the access of people to essential digital assets and services, ensuring that organizations worldwide can thrive in a digital-first world.
Facephi envisions a future where secure digital identity is at the heart of every interaction, seamlessly linking people, applications, services, and data. We aspire to be the foundation that supports and protects each digital connection, enabling individuals and organizations alike to navigate a secure digital world with assurance and confidence. We believe in a future where every identity and every access point is safeguarded by robust, transparent digital identity infrastructure.
2. Why is trustworthy digital identity critical for existing and emerging markets?
In today’s world, digital identity is essential to secure and scalable digital engagement.
As more sectors—finance, healthcare, travel, and others—move toward online services, secure and trusted digital identity becomes critical.
Traditional perimeter-based security models no longer apply effectively, especially with the rise of cloud computing. For this reason, the “Identity-First Security” model has emerged as the most viable framework for protecting digital assets.
Our solutions help organizations transition to a robust, decentralized, and identity-centric security model. The convergence of secure authentication, data protection, and privacy compliance represents a necessary paradigm shift, particularly for emerging markets where secure and equitable digital access can drive significant economic growth.
3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?
Digital identity is a foundational element that powers global digital commerce, enabling access to essential services securely and universally. With secure digital identity, both individuals and organizations can engage in borderless business from anywhere in the world. Facephi facilitates this transformation by enabling secure identity verification that supports secure digital access at scale. As we build identity ecosystems that are interoperable, user-centered, and privacy-respecting, we support the global economy’s digital shift by ensuring seamless and secure interactions across borders. This shift has the potential to unlock access to vital services, foster trust across regions, and create an inclusive digital economy.
Facephi addresses the complex challenges of digital identity through an adaptable, interoperable approach to identity management. Our technology supports multiple identity roles—Issuer, Holder (Wallet), and Verifier—alongside a Trust Registry, enabling us to provide secure identity solutions at every step. We are aligned with standards like mDOC (ISO 18013), W3C Verifiable Credentials, and SD-JWT, ensuring compatibility with global frameworks. By focusing on interoperability and secure frameworks, we help organizations establish the trust and scalability needed for broad digital identity adoption. Our approach encompasses both the issuance and verification of credentials, combining compliance with innovative solutions to ensure secure, accessible, and user-centered digital identity management.
4. What role does Canada have to play as a leader in this space?
Canada has an essential role to play as a leader in secure digital identity, supporting both regulatory frameworks and technological standards that foster trust and innovation. As Canadians increase their reliance on digital services, it is critical to ensure the security and integrity of digital identity systems. By establishing standards for trusted digital architecture, Canada can help shape a secure, transparent ecosystem that enables users to control access to their personal information with precision and confidence. Canada’s commitment to a trustworthy digital identity infrastructure will set an example globally and drive progress in secure, interoperable identity systems.
5. Why did your organization join the DIACC?
Facephi joined the DIACC to collaborate with leading organizations in advancing secure, user-friendly digital identity standards. We share DIACC’s commitment to building a trusted framework that empowers people, businesses, and governments to interact safely online. By participating in DIACC, we contribute to and benefit from a collaborative approach to developing a secure digital identity framework that respects user privacy, ensures interoperability, and promotes innovation.
6. What else should we know about your organization?
Facephi is a global leader in digital identity verification and authentication, providing technology that enables secure, user-friendly access across industries. Our platform supports a wide range of digital identity solutions, from secure identity verification and authentication to credential issuance and verification, adhering to standards like OID4VCI for credential issuance and OID4VP for credential presentation. Our solutions address interoperability, Trust Frameworks, and compliance with global digital identity standards, providing a robust foundation for organizations pursuing digital transformation. Through advanced technology and strategic partnerships, Facephi is shaping the future of secure digital identity and enabling seamless, trusted interactions in an increasingly digital world.
On December 5th, we held our very first Hyperledger Web3j Summit, bringing together a vibrant cross-section of the community—from seasoned contributors and enterprise users to new enthusiasts curious about integrating Ethereum functionality into their Java and Android applications. As a maintainer of Hyperledger Web3j, an LF Decentralized Trust project, I was excited to see participants share insights, raise important questions, and offer new ideas to drive the project forward.
Duplicate product listings, inaccurate information, counterfeit items, and regulatory challenges were among the obstacles Shopee Brazil faced during its rapid growth as a leading e-commerce platform.
To address these issues, Shopee adopted GS1 GTINs to uniquely identify products and integrated Verified by GS1 to authenticate data before listings are approved. This commitment to accuracy and trust is helping Shopee deliver a transparent and efficient shopping experience for both buyers and sellers.
Watch the podcast (Portuguese with subtitles)

Modern technology makes starting an online business easy. However, that also means stiffer competition.
How can aspiring entrepreneurs succeed in the world of e-commerce? In this episode, Jesse Ness of Ecwid by Lightspeed joins hosts Reid Jackson and Liz Sertl to discuss the essential steps and common pitfalls of starting and growing an online business. They discuss how high-quality imagery, detailed product descriptions, and social media engagement can help your store stand out. Jesse also shares insights on emerging market trends like live selling and community engagement.
In this episode, you’ll learn:
How storytelling helps brands stand out in a crowded e-commerce market
The first steps to setting up a successful online store
Tips to overcome growth plateaus and how to scale your business effectively
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(01:43) Online selling with Ecwid
(04:02) How to set up an online store
(08:27) Share your brand story
(12:11) Why entrepreneurs give up too soon
(17:10) The rise of live selling and other e-commerce trends
(22:16) Jesse Ness’ favorite tech
(24:34) Using AI to enhance daily life
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Jesse Ness on LinkedIn
On 10th December 2024, Markus Sabadello, co-chair of the DIF Identifiers & Discovery WG, and Kyoungchul Park, chair of the DIF Korea SIG, delivered a guest lecture at Seoul's Gangnam MegaStudy academy, Korea's largest private educational institution.
The one-hour session covered core decentralized identity technologies including DIDs (Decentralized Identifiers), VCs (Verifiable Credentials), and digital wallets, along with DIF's activities and open-source projects like the Universal Resolver. Students demonstrated sophisticated understanding through their challenging questions about DID-Web3 relationships, VC proof mechanisms, and underlying trust models.
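For readers new to the Universal Resolver mentioned above, the sketch below shows roughly how a client resolves a DID over its HTTP API. This is a minimal illustration assuming the public development instance at dev.uniresolver.io and a did:web example identifier; neither detail comes from the lecture itself.

```typescript
// Minimal sketch: resolving a DID via the Universal Resolver's HTTP API.
// Assumes the public development instance at https://dev.uniresolver.io is
// reachable; a production deployment would point at its own resolver instance.
async function resolveDid(did: string): Promise<unknown> {
  const endpoint = `https://dev.uniresolver.io/1.0/identifiers/${encodeURIComponent(did)}`;
  const response = await fetch(endpoint, { headers: { Accept: "application/json" } });
  if (!response.ok) {
    throw new Error(`Resolution failed for ${did}: HTTP ${response.status}`);
  }
  // The response wraps the DID document together with resolution metadata.
  const result = (await response.json()) as { didDocument?: unknown };
  return result.didDocument ?? result;
}

// Example: resolve a did:web identifier (any DID method supported by the resolver works).
resolveDid("did:web:example.com")
  .then((doc) => console.log(JSON.stringify(doc, null, 2)))
  .catch(console.error);
```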
The MegaStudy IT Academy extended a generous welcome to DIF's Markus Sabadello, thanks to the efforts of the DIF Korea SIG and chair Kyoungchul Park
This session marks another milestone for the DIF Korea SIG since its launch in Busan in July 2023. Established alongside the Ministry of Science and ICT's Self-Sovereign Identity Technology Project, the SIG bridges DIF's technical standards with Korea's evolving digital identity ecosystem.
DIF's growing network of Special Interest Groups across Asia and Africa demonstrates our commitment to global collaboration while respecting local contexts. The Korea SIG exemplifies this approach through productive engagement between international standards and key national institutions. Learn more in our guest blog with Korea SIG Chair Kyoungchul Park.
Aspen Pharmacare Holdings Limited
Lorraine Hill

FIDO passkey adoption doubles in 2024 as major firms opt for passwordless log-in
Passkeys are a biometric security trend to watch in 2025. The FIDO Alliance themed its 11th annual FIDO Tokyo Seminar on how passkey adoption is accelerating, with presentations from Google, Sony Interactive Entertainment, Mastercard, and other organizations joining the journey to password-free living. Microsoft has confirmed its advice on how to make people love passkeys – as it sweeps aside a major vulnerability that exposed 400 million Outlook 365 users.
Major tech brands drive mainstreaming of passkey account log-ins
In 2024, Amazon made passkeys available to 100 percent of its users and has seen 175 million passkeys created for sign-in to amazon.com globally. Google says 800 million Google accounts now use passkeys, with more than 2.5 billion passkey sign-ins over the past two years and sign-in success rates improving by 30 percent. Sony adopted passkeys for the global PlayStation gaming community and saw a 24 percent reduction in sign-in time on its web applications.
Hyatt, IBM, Target and TikTok are among firms that have added passkeys to their workforce authentication options. More credential management products offering passkey options means more flexibility for consumers.
Japan joins passkey party in private sector, academia
The Japanese market showed a notable turn toward passkeys, with Nikkei, Nulab and Tokyu Corporation among firms embracing passwordless authentication technology. Nikkei will deploy passkeys for Nikkei ID as early as February 2025. Tokyu Corporation says 45 percent of TOKYU ID users have passkeys. And Nulab announced a “dramatic improvement in passkey adoption.”
Academia is helping drive innovation, with teams from Keio University and Waseda University winning acknowledgement for their research and prototypes at a slew of hackathons and workshops.
And FIDO, of course, is there to offer support: its Passkey Central website resource on passkey implementation is now available in Japanese, so that Japanese companies can take better advantage of its introductory materials, implementation strategies, UX and design guidelines, and detailed roll-out guides.
The FIDO Japan Working Group, which includes 66 of the FIDO Alliance’s member companies, is now in its 9th year of working to raise passkey awareness in the country.
San Ramon, CA, 16 December 2024 – Following the success of two recent community hackathons co-hosted with the California Department of Motor Vehicles (DMV), the OpenID Foundation, a global leader in open identity standards, is now inviting more companies, government agencies and non-profits to collaborate in a similar way and drive adoption of privacy-preserving identity solutions.
The recent hackathons engaged a broad spectrum of public and private sector participants in order to expand the adoption and use cases of California’s mobile Driver’s License (mDL), which has incorporated the OpenID for Verifiable Credentials family of specifications into its architecture.
The two successful events provided a unique platform for participants to develop 25 real-world use cases and receive feedback from a panel of expert judges, their peers, and a wide spectrum of observers. Teams were encouraged to address key aspects of their solutions, including viability, privacy, security, user experience, and social impact.
Crucially, by engaging verifiers—businesses and government entities that need to consume and verify identity information—the events were able to address the ‘Cold Start Problem’ faced by emerging identity ecosystems. This is the struggle to gain traction on all sides of the market.
The first event in Mountain View brought together more than 50 participants from 15 selected teams, representing leading private sector companies, such as US Bank, Ping Identity, Cisco, and Square, as well as social businesses like Entidad. They highlighted practical and privacy-focused use cases of mDLs, demonstrating the value of secure digital credentials in industries like financial services, retail, healthcare, and entertainment.
The second event in Sacramento focused on public sector use cases, with participation from local, state, national, and international government entities, including the U.S. Air Force, California Highway Patrol, and the City of Los Angeles. Teams explored diverse applications for mDLs, ranging from streamlined access to government services (like applying for government jobs and benefits) to secure ID verification for emergency response coordination and benefit applications.
Earlier this year, in a development that complements the OpenID Foundation’s work with the California DMV, the National Cybersecurity Center of Excellence (NCCoE) signed a collaborative research and development agreement (CRADA) on Digital Identities – Mobile Driving Licenses – with 19 technology providers and industry experts. This included the OpenID Foundation. The goal was to accelerate the adoption of digital identity standards and best practices. The first use case the NCCoE will focus on is electronic ‘Know Your Customer’ for financial services use cases, with the participation of leading banks, several US states, and other industry experts to further refine regulatory acceptance of digital identity technologies.
Gail Hodges, Executive Director at the OIDF, said: “It’s clear that mDLs are firmly on the digital identity agenda in the United States, but we are not yet at the tipping point where digital identity credentials are universally available to Americans or can be presented everywhere Americans would like to use them online and in person.
“The California mDL hackathons have played a meaningful role to help early adopters cross the chasm. Our active involvement in them underscores our commitment to fostering secure, interoperable, and privacy-preserving digital identity solutions that are scalable across the US and the world.
“We invite more individuals, companies, government agencies, and nonprofits to join us in shaping the future of digital identity. We encourage developers and verifiers to explore the OpenID for Verifiable Credentials specifications, which are referenced in ISO/IEC 18013-5 Mobile Driving License (mDL) Application and ISO/IEC 18013-7 Mobile Driving License (add-on function), have been incorporated by the California DMV into its mDL architecture, and were selected by the European Commission as part of the eIDAS 2.0 Architecture Reference Framework.
“Community hackathons like these will be crucial to driving adoption of privacy-preserving identity solutions in order to ultimately build a user-centric identity ecosystem that benefits everyone.”
To learn more about how to participate in shaping the standards and solutions that will drive the next generation of digital identity, please visit the OpenID Foundation.
To learn more about the hackathons, the CA DMV is hosting a public briefing on January 10, 2025 from 12-1pm PST.
ENDS
For more information, please contact:
Serj Hallam E: serj.hallam@oidf.org
Elizabeth Garber E: elizabeth.garber@oidf.org
About the OpenID Foundation
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post OpenID foundation urges private and public sector organisations to collaborate and drive adoption of privacy-preserving identity solutions first appeared on OpenID Foundation.
The California Department of Motor Vehicles (DMV), in partnership with the OpenID Foundation, recently hosted two groundbreaking hackathons aimed at advancing digital identity through California’s mobile Driver’s License (mDL). With over 100 attendees at both events, they were more than simply technical exercises; they were pivotal moments in the evolution of digital identity, bringing together diverse stakeholders to explore real-world use cases and solve pressing challenges.
As a global leader in identity standards, the OpenID Foundation is proud to have co-hosted these hackathons, which not only showcased the potential of mDLs, but also set a precedent for future collaborations between public and private sectors and how we can ‘cross the chasm’ to achieve widespread adoption of digital identity credentials as a ‘public good.’
From Mountain View to Sacramento: Two hackathons that mattered
The first hackathon, held in Mountain View, brought together over 50 participants from leading private sector companies and start-ups. These teams explored practical and privacy-focused applications for mDLs, demonstrating their value in industries like financial services, retail, healthcare, and entertainment.
Several teams at this first event were recognised for their outstanding contributions, including OpenID members:
CISCO for ‘Most Promising Use Case’ on using mDL as a root digital identity to enable password-less authentication.
Block for ‘Best Privacy and Security Design’ for their use case on leveraging mDL for point-of-sale age verification of age-restricted products sold by Square merchants.
MATTR, together with Samsung, Treez and Nuvei, for ‘Best Presentation’ for their use case on mDL for identity and age verification in cannabis dispensary and delivery services.

The second event in Sacramento shifted the focus to public sector use cases. With participation from an impressive range of government entities like the U.S. Air Force, the California Highway Patrol, and the City of Los Angeles, teams tackled diverse applications ranging from helping people apply for government jobs and streamlining access to government services, to emergency and disaster assistance to accelerate and simplify access for those displaced. The teams recognised for their outstanding contribution to this second hackathon included:
California Office of Data and Innovation for ‘Best Overall Solution and Most User-Friendly Design’.
California Governor’s Office of Emergency Services for ‘Best Value to the Residents’.
California Highway Patrol with their hackathon partner Credence ID for ‘Most Viable Implementation’.

Both events attracted over 50 observers, all critical stakeholders playing a key role in promoting secure, privacy-protecting, and interoperable digital identity solutions that make a tangible difference in people’s lives.
The success of both events was the result of meticulous planning and a targeted outreach strategy from both the CA DMV and the OpenID Foundation. By ensuring each team that participated represented a real-world challenge that a private entity or public agency is trying to tackle today, the hackathons delivered relevant, practical, and valuable outcomes.
Moving the digital identity ecosystem out of the ‘Cold Start Problem’
One of the most significant challenges for emerging identity ecosystems is the ‘Cold Start Problem’—the difficulty of gaining traction on both sides of the market. The hackathons addressed this by actively engaging verifiers—businesses and government agencies that need to consume and verify identity information. Basic tasks like setting up California test credentials and pointing out standards and reference code were part of the support extended to hackathon participants before the event.
By doing so, the events not only demonstrated the value of mDLs, but also created a virtuous cycle of adoption. Leading edge developers were given a safe space to innovate, knowing that they would have expert support and a route to live production, executives within their businesses could see real live demonstrations of the technology and how it could be used to deliver customer value, and observers from across the community could see the technology and the range of use cases.
A true focus on collaboration and engagement
What set these hackathons apart was their focus on engagement. In addition to a series of presentations, the events featured live demonstrations and interactive sessions. Teams were provided dedicated time to interact with and learn from each other. Observers could engage directly with teams, explore use cases, and provide immediate feedback. This dynamic environment fostered meaningful connections and has spurred further collaborations.
A commitment to open standards and transparency
The California DMV’s commitment to transparency and open standards was evident throughout these hackathons, and the OpenID Foundation’s involvement was instrumental in this regard.
By incorporating the ISO 18013-5, W3C Verifiable Credentials, and OpenID for Verifiable Credentials (OID4VC) specifications, the DMV demonstrated its commitment to aligning its mDL infrastructure with global standards. The presence of so many community experts as technical support, judges, and peers enabled hackathon participants to benefit from critical feedback on their use cases, provide standards leaders with feedback on the specifications, and help observers appreciate the tangibility of the solutions.
This collaboration underscored the important role independent, non-profit standards bodies can play in government-led digital infrastructure projects.
Momentum and next steps
The hackathons have proven to be catalysts for ongoing innovation. Many of the participating teams have continued to develop their solutions, while new stakeholders—both from within and outside California—have expressed interest in participating in future initiatives. For instance, the second hackathon led to follow-up discussions with the Mexican government about potential proof-of-concept projects. Furthermore, governments from the U.S., Europe, and Australia have expressed interest in hosting similar events in 2025.
The OpenID Foundation and the California DMV are now exploring ways to broaden access to the CA mDL test credentials for verifiers and other critical stakeholders. This will help accelerate the development of new use cases and drive wider acceptance and use of mDLs.
Moreover, the success of these hackathons highlights the need for more such events. They provided a unique platform for developers, user experience professionals, and business and product leaders to experiment with digital identity technologies in the run up to live deployments.
For anyone interested in learning more about the hackathons, the CA DMV is hosting a public briefing on January 10, 2025 from 12-1pm PST.
Join the movement and help shape the future of digital identity
The OpenID Foundation is now inviting companies, government agencies, and non-profits to collaborate and make a difference in advancing privacy-preserving identity solutions. Whether through hackathons, workshops, or other forms of engagement, there are myriad opportunities to contribute to this vital work and help build an identity ecosystem that is secure, user-friendly, and inclusive.
For more information on upcoming events and how to participate, visit the OpenID Foundation’s website or contact us directly.
About the OpenID Foundation
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Driving Innovation and Adoption: Reflections on the California mDL Hackathons first appeared on OpenID Foundation.
As Hyperledger Foundation laid out in the Helping a Community Grow by Pruning Inactive Projects post, there is an important life cycle to well-governed open source projects. Through the evolution of the market, Hyperledger Foundation and, now, LF Decentralized Trust have been the home to a growing ecosystem of blockchain, identity and related projects.
In this episode of the Trust Issues podcast, host David Puner sits down with Andrew Shikiar, the Executive Director and CEO of the FIDO Alliance, to discuss the critical issues surrounding password security and the innovative solutions being developed to address them. Andrew highlights the vulnerabilities of traditional passwords, their susceptibility to phishing and brute force attacks, and the significant advancements in passwordless authentication methods, particularly passkeys. He explains how passkeys, based on FIDO standards, utilize asymmetric public key cryptography to enhance security and reduce the risk of data breaches.
The conversation also covers the broader implications of strong, user-friendly authentication methods for consumers and organizations, as well as the collaborative efforts of major industry players to make the internet a safer place. Additionally, Andrew highlights the importance of identity security in the context of these advancements, emphasizing how robust authentication methods can protect personal and organizational data.
Tune in to learn about the future of authentication and the steps being taken to eliminate the reliance on passwords.
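As a rough illustration of the asymmetric, on-device key model Andrew describes, the sketch below shows a browser-side passkey registration call using the standard WebAuthn API. The relying party, user handle, and locally generated challenge are placeholders; in a real deployment the challenge and user data are issued by the relying party's server.

```typescript
// Minimal browser-side sketch of passkey registration via the WebAuthn API.
// The challenge and user handle below are placeholders for illustration only.
async function registerPasskey(): Promise<void> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the server
    rp: { id: "example.com", name: "Example Relying Party" },
    user: {
      id: new TextEncoder().encode("user-1234"), // opaque user handle, not an email
      name: "alice@example.com",
      displayName: "Alice",
    },
    // ES256 and RS256 are commonly requested algorithms.
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },
      { type: "public-key", alg: -257 },
    ],
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e. a passkey
      userVerification: "required", // biometric or PIN check on the authenticator
    },
  };

  // The private key never leaves the authenticator; only the public key and an
  // attestation are returned for the server to store and later verify against.
  const credential = (await navigator.credentials.create({
    publicKey: options,
  })) as PublicKeyCredential | null;
  if (credential) {
    console.log("New passkey credential id:", credential.id);
  }
}
```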
The Swiss Parliament has resolved all outstanding differences between the National Council and the Council of States regarding the Electronic Identity Act (BGEID), paving the way for a formal final vote scheduled for December 20, 2024.
The implementation of SWIYU, encompassing both the electronic identity and its underlying trust infrastructure, has the potential to establish an open, interoperable ecosystem for digital credentials. This framework can provide a solid foundation for the secure exchange of authentic data, thus fostering trustworthiness across digital applications in public administration, the economy, and civil society. Key principles of SWIYU and the e-ID include privacy by design, data minimization, user-centricity, and a commitment to openness and collaboration. We, at DIDAS, expect SWIYU, when fully implemented, to serve as an important building block promoting confidence in the digital realm, boosting economic growth and digital inclusion.
The new Swiss electronic identity (e-ID) system takes a completely different approach compared to the model that was rejected by voters in 2021. Unlike the earlier proposal, which handed the responsibility for issuing and managing digital identities to private companies, the new system is entirely state-operated. This ensures that the government, as a public entity, is responsible for issuing e-IDs and maintaining the necessary infrastructure. This change directly addresses the privacy and security concerns raised previously, making societal control easier. The updated framework is designed around user empowerment, with privacy by design and data minimization as fundamental principles, ensuring transparency and building confidence in its use.
What’s truly transformative is the system’s decentralized architecture, drawing its inspiration from Self-Sovereign Identity (SSI) principles. This gives individuals control over their own digital identities and the ability to decide what information to share with third parties, such as service providers. The design aligns with the “trust diamond” framework, which organizes four essential roles: the government as the issuer, individuals as the holders, service providers as the verifiers, and a governance framework that ensures everything operates within clear, enforceable and trusted rules (see the illustrative sketch below). This structure creates a reliable and secure ecosystem for digital identity, addressing shortcomings of the previous e-ID vision and resulting in a user-centric, privacy-preserving approach.
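As a purely illustrative sketch (not the SWIYU implementation or any official interface), the four trust diamond roles can be pictured as simple TypeScript interfaces around a verifiable credential:

```typescript
// Illustrative only: the "trust diamond" roles modelled as minimal types.
interface VerifiableCredential {
  issuer: string;                  // e.g. a government identifier
  subject: string;                 // the holder's identifier
  claims: Record<string, unknown>; // attested attributes
  proof: string;                   // cryptographic signature over the credential
}

interface Issuer {                 // the government issuing the e-ID
  issue(subject: string, claims: Record<string, unknown>): VerifiableCredential;
}

interface Holder {                 // the individual, keeping credentials in a wallet
  store(credential: VerifiableCredential): void;
  present(requestedClaims: string[]): Partial<VerifiableCredential>; // selective disclosure
}

interface Verifier {               // the service provider checking a presentation
  verify(presentation: Partial<VerifiableCredential>): boolean;
}

interface GovernanceFramework {    // the rules the other three roles operate under
  isTrustedIssuer(issuerId: string): boolean;
}
```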
DIDAS is exceptionally proud to have made a number of key contributions to Switzerland’s efforts, ensuring the system reflects fundamental Swiss values such as federalism, direct democracy, self-determination, and autonomy. Since its inception in 2020, DIDAS has been a strong advocate for SSI principles, emphasizing user control over personal data and the need for a secure, privacy-preserving digital ecosystem. It has been an integral part of our vision that a digital trust ecosystem must not only safeguard privacy but also enable economic value creation.
Early Advocacy and Strategic Vision
In October 2020, DIDAS was established with the primary goal of positioning Switzerland as a leader in developing and implementing privacy-preserving technologies, services, and products related to digital identity and electronically verifiable data. This vision laid the groundwork for a digital trust ecosystem that emphasizes data sovereignty and identity management based on tight alignment with the SSI principles.
Early Advocacy for Self-Sovereign Identity Principles
In December 2021, DIDAS published an explainer on SSI, outlining its core principles and the association’s commitment to establishing a viable and thriving SSI ecosystem. The DIDAS initiative aimed from the start to educate stakeholders and promote the adoption of SSI principles and frameworks within Switzerland’s digital infrastructure, for a more privacy preserving and frictionless digital future.
Contributing to the Dialog around National e-ID Legislation
By October 2021, DIDAS had provided extensive commentary on Switzerland’s target vision for the e-ID system. The association advocated for an “ecosystem of digital proofs”, where the e-ID would serve as one credential among many, enabling both governmental and private entities to issue other types of credentials. This approach aimed to create a flexible and future-proof foundation for digital interactions in Switzerland.
In December 2021, following a public consultation, the Swiss Federal Council decided to orient the implementation of the future e-ID system around Self-Sovereign Identity (SSI) principles. DIDAS welcomed this decision, recognizing it as a commitment to a decentralized solution architecture that prioritizes maximum privacy protection and positions the e-ID as a cornerstone of a broader ecosystem of digital credentials. “Ambition Level 3” anchored the approach of building an ecosystem of (business-domain) ecosystems in which, in addition to the e-ID, other verifiable credentials can be exchanged securely and reliably.
Promoting Technological Innovation
In its early stages, DIDAS members established an open sandbox environment to facilitate the development and testing of Self-Sovereign Identity (SSI) solutions. This sandbox provided a controlled setting where developers and organizations could experiment with SSI technologies, enabling the creation of interoperable and secure digital identity systems. By offering access to resources such as repositories and live demonstrations, DIDAS’s sandbox played a crucial role in iteratively advancing knowledge within Switzerland’s E-ID movement.
DIDAS has consistently emphasized the importance of advanced digital signature technologies to enhance the Swiss e-ID framework. Following DIDAS’ statement in response to the e-ID technology discussion paper, and its recommendation of Scenario “A” as a feasible technical starting point in February 2024, the association proposed in March 2024 to adopt the concept of dual signatures: a technical approach to bridging the gap between well-established but less feature-rich cryptography and newer, less proven techniques. Supported by the US Department of Homeland Security, this technique involves attaching multiple digital signatures to a single payload, each offering distinct security or privacy features. This methodology enhances agility and robustness, accommodating various cryptographic standards and privacy needs without compromising data integrity.
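As a rough illustration of the general dual-signature idea (not the specific scheme proposed for the Swiss e-ID), the sketch below attaches two independent signatures, one from a long-established algorithm and one from a newer one, to the same payload using Node's built-in crypto module:

```typescript
// Minimal Node.js sketch of the general "dual signature" idea: one payload
// carries two independent signatures produced with different algorithms.
// Illustration only, not the actual Swiss e-ID scheme.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const payload = Buffer.from(JSON.stringify({ credential: "example e-ID payload" }));

// Signature 1: a well-established algorithm (ECDSA over NIST P-256).
const ecdsa = generateKeyPairSync("ec", { namedCurve: "prime256v1" });
const ecdsaSig = sign("sha256", payload, ecdsa.privateKey);

// Signature 2: a newer algorithm with different properties (Ed25519).
const ed25519 = generateKeyPairSync("ed25519");
const ed25519Sig = sign(null, payload, ed25519.privateKey);

// The payload travels with both signatures attached; a verifier can rely on
// whichever signature matches its policy, or require both.
const signedEnvelope = {
  payload: payload.toString("base64"),
  signatures: [
    { alg: "ES256", value: ecdsaSig.toString("base64") },
    { alg: "EdDSA", value: ed25519Sig.toString("base64") },
  ],
};

console.log(
  "ECDSA valid:", verify("sha256", payload, ecdsa.publicKey, ecdsaSig),
  "| Ed25519 valid:", verify(null, payload, ed25519.publicKey, ed25519Sig),
  "| envelope:", JSON.stringify(signedEnvelope).length, "bytes"
);
```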
Advocating for Economic Value Creation beyond the societal value of a self-sovereign E-ID
Beyond technological contributions, DIDAS has been a committed advocate for leveraging the Swiss Confederation’s e-ID programme to establish a digital trust and authentic data exchange ecosystem that creates sustainable economic value. The association further envisioned this future ecosystem removing friction in B2B and cross-border processes by enabling higher levels of assurance in automation, significantly reducing the risk of fraud, simplifying the management of compliance, and allowing for the proliferation of digital trust-based businesses and innovations. On the basis of the DIDAS Sandbox, members have been experimenting around common use cases to explore ecosystem value creation and are looking forward to supporting issuers, holders and verifiers, as well as technology vendors, in further experimenting with the Confederation’s public beta infrastructure in early 2025.
In January 2024, during the World Economic Forum’s Annual Meeting in Davos, we collaborated with digitalswitzerland to co-organize the “Digital Trust” session at the digitalswitzerland Village. This event convened over 50 speakers and panelists, including industry leaders and policymakers, to discuss the critical role of digital trust in today’s interconnected world.
In September 2024, at the event organized by the State Secretariat of International Finance (SIF) and the Swiss Financial Innovation Desk (FIND) at the Swiss Embassy in Singapore, we had the privilege of moderating and contributing to discussions on digital trust, emphasizing the importance of verifiable data and trust frameworks in global financial ecosystems. Our insights have also been shaping a soon-to-be-published paper, where DIDAS explores key principles and practical strategies to advance digital trust.
Collaborative Efforts and Future Outlook
We strongly believe that DIDAS’s collaborative approach, engaging as a not-for-profit, independent association with government bodies, private sector stakeholders, and civil society, has been instrumental in shaping Switzerland’s digital identity efforts. The association’s commitment to a pragmatic, principle-based, iterative, and inclusive methodology has ensured that the SWIYU vision aligns with both national interests and international standards.
As Switzerland prepares for the final approval of the e-ID legislation on December 20, 2024, the foundational work of DIDAS continues to be important. We have a lot of work ahead of us to support the adoption of the e-ID and its mechanisms for exchanging authentic data. We further see our role in helping to increase the fluency of business leaders and innovators in applying these mechanisms. We’ll use the combined expertise of our members and our energy to promote and further enhance the key aspects of Ambition Level 3 governance and cross-ecosystem interoperability. Continued experimentation and dialogue are unavoidable in order to uncover and realize the business value of this emerging trust infrastructure.
We are also proud to co-organize DICE (Digital Identity unConference Europe) in collaboration with Trust Square and the Internet Identity Workshop (IIW), rooted in Mountain View, California. DICE first launched in 2023 and exceeded expectations with 160 expert participants contributing to dynamic discussions. The second DICE in 2024 was a milestone, opened by Federal Councilor Beat Jans, underscoring the importance of these participatory conferences and their contribution to the development of the E-ID Framework and the Swiss Trust Infrastructure. DICE fosters joint learning, evolves collective thinking, and accelerates the adoption of digital identity and verifiable data solutions. In 2025, two events are planned, further advancing open dialogue as a cornerstone of collaboration for authenticity and trust in the digital realm.
The association’s vision of a secure, adaptable, and authentic data ecosystem built on SSI principles underlines its dedication to a sustainable digital environment that favors privacy and security, while enabling significant economic value creation.
We look forward to continuing to create positive impact with all of our members, partners and other stakeholders.
Cordially, The DIDAS Board
Further Articles and details on contributions in the DIDAS Blog
Data Protection Day: Freedom of Choice in the digital age
The Human Colossus Foundation contributes to the public conference (in french) on Wednesday 28 January 2025 organised by the Faculty of Law, Criminology and Public Administration (FDCA) of the University of Lausanne (registration required).
In the digital age, freedom of choice is profoundly affected by the way data is collected, shared and used. This freedom of choice is closely linked to the notion of privacy.
With the Internet and other networks, we are faced with a wide range of choices in all aspects of our daily lives. Whether it's online shopping, social networking, online banking or healthcare, we are constantly being asked to make decisions at both a personal and professional level. With artificial intelligence tools making their way into our daily lives, is our private sphere still sufficiently protected to guarantee informational self-determination?
Taking the example of personalised medicine in the context of freedom of choice in the digital age, it becomes clear that access to and control of personal health data are crucial. Personalised medicine is committed to providing individuals with individualised diagnoses and treatments. This requires technological tools to proactively manage their health information to ensure that it is used ethically and securely. Digital technology must also be used to empower patients, enabling them to make informed decisions about their health, while contributing to significant advances in medical research.
Based on its work, the Foundation will present these concepts through the current issues in Switzerland linked to the E-ID digital identity project and its impact on the healthcare ecosystem.
The Human Colossus Foundation is a neutral but technology-savvy Geneva-based non-profit foundation under the supervision of the Swiss federal authorities.
The OpenID Foundation is now offering a Digital Identity Round-Up, similar to that which OIX used to offer its members. If you have suggestions or sources that we need to add to our lists as we develop this round-up, please feel free to contact Elizabeth Garber.
- Cyprus has launched a mobile digital ID app known as Digital Citizen. Wallets are key here, with the app enabling citizens to hold digital documents such as biometric ID cards, driver's licenses, and vehicle roadworthiness certificates on their mobile. These can also be digitally verified using QR codes.
- Ghana has introduced a biometric border control system at its Kotoka International Airport in Accra. Biometrics are key here, with the eGates system using the biometric national ID card – the Ghana Card – which is read by biometric gates before a biometric match is performed to open a second set of gates.
- New Mexico has become the ninth US state to introduce mobile driver's licenses, which can be loaded onto either an Apple or a Google Wallet. Wallets are key here, with the mobile license enabling New Mexico residents to carry their driver's licenses on their mobile and use them at select TSA checkpoints across the country. It enables digital verification through the scanning of a QR code, after which encrypted data is transmitted via Bluetooth.
- Papua New Guinea has released its National Digital ID policy, which is open for public consultation. According to ICT Minister Timothy Masiu, the policy establishes an official digital ID system known as SevisPass, with the key use case of bank account opening in order to promote financial inclusion.
- The Swiss Government has outlined plans for the technical implementation of its national digital ID, which will be held in a wallet known as Swiyu. The first stage of implementation is set to be tested in Q1 2025, with the source code of individual components available in open source. The second-stage solution is planned to include more stringent privacy requirements to prevent tracing eIDs to an individual – with the government allocating USD$1.1 million for research to help develop this.
- Nigeria has issued a procurement notice for a systems integrator for its new NIMS 2.0 digital ID system, which will be underpinned by the open source MOSIP platform. Biometrics are also part of this notice, which asks the SI to integrate MOSIP with ABIS solutions and biometric enrolment kits. Legacy data from Nigeria's current identity infrastructure will also need to be migrated.
- Air Canada is launching a digital identification programme for travellers departing from Vancouver International Airport, with Montreal, Ottawa, Calgary, Toronto, Victoria, and Edmonton set to follow. Biometrics are key here, with the service using facial recognition to verify travellers at the gate, meaning they do not need to show their physical boarding passes or government-issued IDs.
- Law enforcement agencies in the UK have issued a tender notice for a live facial recognition (LFR) system worth up to £20 million. Biometrics are key here, with the system set to compare live camera feeds against watchlists to locate persons of interest. Despite opposition from civil rights organisations and lawmakers, the UK Government continues to support police use of LFR as a means to combat crime.
- On Friday 29th November, Brazil's Pix digital payments system reached a new record of 239.9 million transactions in one day. DPI is key here, with Brazil's Central Bank noting how this scale demonstrates the role of Pix as a public digital infrastructure. A Central Bank survey also finds that Pix is used by 76.4% of Brazilians, and is the most common form of payment for 46% of respondents.
- Japan has discontinued the issuance of health insurance cards, replacing them with the My Number digital ID. Adoption is key here, with Japan using this strategy to try to drive adoption in the country, which so far has been low due to concerns about system glitches and privacy.
- Papua New Guinea has announced plans to follow Australia in legislating age assurance for 'certain social media platforms'. Safety is the primary driver here, with the government's Digital Transformation Lead, Steven Matainaho, claiming that the move is to 'protect children from harmful content' due to the 'concerning rise in fraud, illegal goods distribution, human trafficking, disinformation, and cyber harassment'. Adults will also need to use mandatory digital ID (known as SevisPass) for accessing 'age-restricted content'.
- The four major mobile operators in France – Bouygues Telecom, Free, Orange, and SFR – have joined forces to improve digital identity verification for online businesses. Interoperability is key here, with the operators introducing two new APIs to unify specifications across mobile networks. These are based on the CAMARA standard, an open-source project developed by the Linux Foundation.
- The UK Home Office is planning to conduct trials of remote and in-person biometric fingerprint collection using smartphones for foreign nationals applying to come to the UK. However, there are concerns about the feasibility of this plan, given that biometric fingerprint data in passports is currently protected by Extended Access Control (EAC) and can only be read by authorities of EU member states. Meanwhile, there are security concerns that remote fingerprint captures are more susceptible to AI-enabled fraud.
- New research from the Cambridge Centre for Alternative Finance (CCAF) finds that 60 jurisdictions have implemented legislation or regulations related to open banking. Competition within the financial services industry is the key driver of adoption in 44 of these jurisdictions. The report also finds regional variation in open banking approaches, with jurisdictions in Europe, Central Asia, the Middle East, and North Africa predominantly embracing a regulation-led approach, while jurisdictions in Sub-Saharan Africa and Asia Pacific usually favour a market-led approach.

New event notifications:
- Biometric Update is running a webinar called 'Navigating the Emerging APAC Market for Digital ID' at 3.30pm AEDT/11.30pm EST on 11th February 2025.
- The State of Open Con 2025 is set to take place on 4th and 5th February 2025 in London.
- Future Identity Finance is coming up on 19th March 2025 in London, UK.

About The OpenID Foundation (OIDF)
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation's OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation's standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling "networks of networks" to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Digital Identity News Round-Up: 13 Dec 2024 first appeared on OpenID Foundation.
Imagine connecting and configuring devices on an oil rig in the middle of the ocean with limited human intervention. That’s the reality of what can be achieved with the FIDO Alliance’s Device Onboarding (FDO) standard. This is an example of the applications that IoT pioneer ASRock Industrial is bringing to life.
The rapid proliferation of IoT devices and Edge computing across industries has brought with it unprecedented opportunities and challenges. By 2025, over 75 billion IoT devices are expected to be connected globally, increasing complexities in device management and widening the attack surface for malicious actors. Recent studies suggest nearly 57% of IoT devices are susceptible to medium or high-severity attacks.
Corporate Overview
ASRock Industrial, a global leader in industrial systems and motherboards, has become one of the first vendors to provide FDO-enabled compute solutions for industrial applications. The company offers industrial PC systems, motherboards, edge computers, and other products for industries such as automation, robotics, entertainment, and security, as well as cutting-edge systems for smart cities, energy firms, pharmaceuticals, automotive and more to customers around the world. ASRock Industrial is leading the way in the industrial IoT industry with its FDO certified solutions that make device onboarding more efficient, less vulnerable, and more scalable.
On the Edge: The Challenges of Industrial IoT
"FDO's advanced security framework enables us to deliver unparalleled reliability and adaptability, empowering our clients to scale confidently in increasingly complex environments." – Kenny Chang, Vice President of Product and Marketing Division, ASRock Industrial
ASRock Industrial’s customers, like many in the industry, face challenges when deploying IoT devices and edge computing solutions quickly and securely.
- Security vulnerabilities: Traditional manual onboarding methods leave devices vulnerable to unauthorized access and data breaches. For example, a connected IoT device may still have the original manufacturer's default password in place, which increases the risk of password-related device compromises. Manual processes also increase the risk of exposed, unmanaged devices on the network. In industries like energy and transportation, secure operations are vital to public safety and system reliability.
- Time and cost inefficiencies: Not only are manual processes time-consuming, hiring skilled installers is extremely expensive. When calculating the time and cost for a skilled engineer to manually onboard edge devices, it's important to include not only the technical setup time but also the travel time to what potentially may be multiple sites. ASRock Industrial estimates that before FDO, users could spend up to $1,000 per device implementation*. With FDO the installation is not only much faster and more secure, but it is also a task that can often be handled by existing on-site staff.
- Complexity and scalability: Legacy onboarding approaches are complex to deploy and manage. This complexity is only further exacerbated by the remote and high-risk environments many industrial applications are in. Sending skilled engineers to these environments not only creates bottlenecks and slows scalability, it introduces safety risks that further amplify costs.
- Lack of interoperability: The IoT space is very fragmented, with multiple proprietary platforms and operating systems. Existing "zero-touch" solutions are restricted in compatibility, making it hard to support clients across different sectors.

Creating an FDO Solution
To solve these challenges, ASRock Industrial turned to FIDO Device Onboard (FDO), and in doing so has become one of the market's earliest adopters of this compelling technology. ASRock Industrial has integrated FDO into its flagship iEP-5010G series, a robust edge controller built for demanding industrial applications and harsh environments. The iEP-5010G series can operate within a wide temperature range of -40 to 70 degrees and supports 6-36VDC power inputs, 4G LTE, 5G, Wi-Fi 6E, and Bluetooth, and offers the most flexible I/Os and expansion options, making it a fit for industrial automation, robotics, transportation and more.
The ASRock Industrial FDO solution has been designed with FDO’s advanced features in mind. It delivers end-to-end FDO onboarding capabilities, encompassing all critical FDO functions: manufacturer, owner and rendezvous server.
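For readers unfamiliar with FDO, the rough shape of the onboarding flow behind those three roles is sketched below. This is an illustrative outline only, not ASRock Industrial's implementation or the actual FDO API: the endpoints and helper functions are hypothetical, while the TO0/TO1/TO2 stages and the manufacturer/owner/rendezvous roles come from the FIDO Device Onboard specification.

```typescript
// Illustrative sketch of the FDO onboarding flow (hypothetical helpers and endpoints).
// Roles: Manufacturer (creates the ownership voucher), Owner (final operator),
// Rendezvous server (matchmaker between device and owner).

interface OwnershipVoucher { deviceGuid: string; ownerPublicKey: string; }

// TO0: the owner registers "where to find me" for a device it owns.
async function registerOwner(rendezvousUrl: string, voucher: OwnershipVoucher, ownerUrl: string): Promise<void> {
  // In real FDO this is an authenticated protocol exchange, not a single call.
  await fetch(`${rendezvousUrl}/to0`, { method: "POST", body: JSON.stringify({ voucher, ownerUrl }) });
}

// TO1: the device asks the rendezvous server who its owner is.
async function locateOwner(rendezvousUrl: string, deviceGuid: string): Promise<string> {
  const res = await fetch(`${rendezvousUrl}/to1?guid=${deviceGuid}`);
  const { ownerUrl } = await res.json();
  return ownerUrl;
}

// TO2: the device contacts the owner, proves its identity, and receives credentials,
// configuration, and workloads (the "late binding" of OS and software described below).
async function onboard(ownerUrl: string, deviceGuid: string): Promise<void> {
  await fetch(`${ownerUrl}/to2`, { method: "POST", body: JSON.stringify({ deviceGuid }) });
}
```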
Rather than hard programming devices for each different operating system, the iEP-5010G series device controller can be deployed as one system without pre-installation of OS or additional programming. This simplifies manufacturing and provides a better customer experience with the flexibility to decide OS requirements later in the process.
The FDO standard and associated certification program ensure consistency and interoperability. Standardized onboarding means devices are consistently and correctly deployed every time, removing the risk of errors for ASRock Industrial’s customers. Most importantly, the open standards-based approach means it can work seamlessly with other partners in the industry and support players across the globe.
Results and Impact
While early implementation results are still being gathered, ASRock Industrial anticipates significant benefits for both the company and its customers.
One of ASRock Industrial’s earliest use cases lies in the smart city domain, where their FDO-enabled iEP-7020E series devices leverage FDO technology to automatically onboard hardware and software to connect electric vehicle (EV) charging points and related devices seamlessly. By enabling remote monitoring of charging stations across multiple locations, FDO has eliminated the need for engineers to visit sites physically. Its AI-driven analytics have dramatically enhanced operational efficiency, while remote surveillance has addressed key challenges such as charger hogging, vandalism, and unauthorized access. This capability ensures more efficient and timely incident management. As urban demands evolve, FDO serves as a robust foundation for scalable, secure deployments, delivering sustained benefits over time.
Looking Ahead
ASRock Industrial's investment in FDO puts us in a prime position to meet the rigorous demands of Industry 4.0 advancements and provide customers with security levels that protect against the expanding edge threat landscape. In 2024, ASRock Industrial became one of the first to achieve FDO certification, passing the FIDO Alliance's rigorous independent testing processes. The results of this testing demonstrate that ASRock Industrial's products fully meet the FDO specification, meaning partners and clients can trust the security, interoperability and FDO functionality of these solutions.
FDO certification also plays an important role in differentiating ASRock Industrial, making its products more marketable because they can meet the requirements of a growing number of RFPs that call out FDO. Additionally, it reduces the company's need to spend time and effort on intensive vendor bake-offs, allowing ASRock Industrial to spend more time innovating its product lines and value-added services.
Read the Case Study
"Deploying FDO has marked a pivotal shift for ASRock Industrial, establishing a new benchmark in secure, scalable onboarding for industrial edge AIoT solutions. This deployment cements ASRock Industrial's leadership in industrial computing security and sets the stage for us to shape the future of Industry 4.0 with solutions that are both resilient and future-ready." – Kenny Chang, Vice President of ASRock Industrial
Today’s the last day at work for our members, before we take our traditional three weeks off over the festive period. We published 32 posts on this blog in 2024 — about everything from strategic uses of ambiguity to how we’re using AI — but which were the most popular?
You can tangle yourself up in knots thinking too hard about end of year retrospectives. After all, anything published at the start of the year has almost 12 times more chance to be popular than something published last week, right? Also, we might have shared some posts on the socials more than others 🤔
That’s where a little light editorialisation comes in, meaning I can mention some of the work we did this year. For example, with the Digital Credentials Consortium on communications strategy, Friends of the Earth around AI Sustainability principles, and an evaluation project we carried out for Jobs for the Future and the International Rescue Committee around Verifiable Credentials for New Americans.
The rest of this post is divided into five thematic sections featuring three posts each:
- Open Badges & Verifiable Credentials
- Community Building
- Open Recognition
- Systems Thinking
- Everything Else

I hope you enjoy (re-)reading them as much as we enjoyed writing them. Why not use that clap button and/or reshare your favourites on whatever social network you're using this week? 😊
Open Badges & Verifiable Credentials
- Why Open Badges 3.0 Matters
- Examining the Roots
- A Compendium of Credentialing

Community Building
- The Power of Community Knowledge Management
- Building a Minimum Viable Community of Practice (MVCoP)
- Finding your activist friends

Open Recognition
- Open Recognition — A feminist practice for more equal workplaces
- Towards a manifesto for Open Recognition
- A Little Open Recognition Goes A Long Way

Systems Thinking
- An Introduction to Systems Thinking
- Finding Unexamined Assumptions Through Systems Thinking and Ambiguity
- Pathways to Change

Everything Else
- How to find the right mental models for your audience
- Staying on Track
- The Business Case for Working Openly and Transparently

Whatever you're doing over the holiday season, have a great one! We'll be back in January with more insights at the intersection of learning, technology, and community.
Might we be able to help you in 2025? Why not get in touch?
🎁 Our most popular posts of 2024 was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
The development of a robust ecosystem around the upcoming E-ID implementation in Switzerland represents an essential next step in building Trust Infrastructure. For us at DIDAS it has always been vital to emphasize that while the E-ID verifiable credential is very important, it is still just a building block, the first step towards a much larger ecosystem where a variety of verifiable credentials will be issued and exchanged on a daily basis.
Many of these credentials, as well as the processes depending on them, will be implemented in the private sector, the vision known as “Ambition Level 3”. This is where the real economic value will come from. Much like the road infrastructure is built by the state and then fuels the economy, so will the Trust Infrastructure serve as a critical foundation, a privacy preserving enabler, for all kinds of native digital processes in the near future.
The E-ID framework, including the underlying Trust Infrastructure, primarily targets personal identity and credentials. To realize the complete potential of the Credentials' Ecosystem, however, the open topics around organizational identity – the identity of an organization itself as well as that of its representatives – must be addressed.
Unlike natural persons, legal entities require unique considerations for authentication, governance, and compliance. A well-designed solution for legal entity identities could unlock significant opportunities for global trade, regulatory compliance, and business interoperability.
To succeed, the chosen approach must be flexible, future-proof, and globally scalable. Not only from a technical standpoint but also from a governance perspective. A scalable solution must accommodate the widest variety of jurisdictions, regulatory requirements, and business use cases, ensuring it is both technically sound and broadly acceptable.
The Role of vLEI Credentials
One of the most promising solutions in this space is the Verifiable Legal Entity Identifier (vLEI) credentials ecosystem. Pioneered by the Global Legal Entity Identifier Foundation (GLEIF), vLEI credentials aim to provide an extensible basis for an electronically verifiable and trustworthy way to identify legal entities and their representatives in the digital world. GLEIF, a globally recognized authority, oversees the issuance of Legal Entity Identifiers (LEIs) that are already widely used in the financial sector to improve transparency and reduce risk.
The vLEI system builds upon this foundation by leveraging cutting-edge technology to ensure verifiability and scalability. However, despite its many advantages, the underlying technical framework—centered around Key Event Receipt Infrastructure (KERI) and Authentic Chained Data Containers (ACDC)— has proven rather challenging to grasp.
Deep Dive into the vLEI and its Technical Foundations
To make it easier to understand and appreciate the solution, we have undertaken a deep dive into both the governance and technical aspects of the vLEI ecosystem. Our goal is to provide an accurate insight into the key mechanisms and characteristics that make the vLEI ecosystem a prime candidate to serve as the backbone for use cases and applications that require a globally scalable legal entity identity framework.
First, we look at the governance, highlighting the key aspects of why the vLEI ecosystem can meet the diverse regulatory requirements of jurisdictions worldwide while accommodating the varied needs of businesses. Then, on the technical side, we explore the foundational KERI/ACDC technology, which promises enhanced security, efficiency, transparency and interoperability.
To learn with us, watch the recorded deep-dive session on vLEI and its technical underpinnings.
Stay tuned for more insights as we continue this journey toward shaping the future of legal entity identities in the digital era.
Together, let’s make the vision of a seamless, privacy-first digital ecosystem a reality. Stay connected and be part of the transformation!
As we continue to expand the reach and impact of LF Decentralized Trust globally, we are thrilled to announce our active participation in LF India—a new Linux Foundation initiative that embodies the spirit of collaboration, innovation, and community that defines our open source community. India has become a global epicenter for digital transformation, with a thriving developer ecosystem, forward-thinking government initiatives, and a remarkable commitment to open source development.
Momentum continues in Japan with notable passkey success stories and deployments from Nikkei, Tokyu, Google, Sony Interactive Entertainment, KDDI, LY Corporation, Mercari and NTT DOCOMO
TOKYO, December 12, 2024 – More than 15 billion online accounts can use passkeys for faster, safer sign-ins – more than double the number at this time last year. The momentum behind FIDO and passkeys is the focus of today's 11th annual FIDO Tokyo Seminar, where hundreds gathered to learn about the latest developments in the global push to eliminate dependence on passwords. Presenters include those from Google, Sony Interactive Entertainment, Mastercard, Waseda University, the Institute of Information Security, KDDI, LY Corporation, Mercari and NTT DOCOMO.
Passkeys become more widely available for consumer and workforce applications – and companies are seeing the benefits
Passkeys provide phishing-resistant security with a simple user experience far superior to passwords and other phishable forms of authentication. Many consumer brands are reporting passkey success stories and business benefits; some notable new and recent announcements include:
- Amazon made passkeys available to 100% of its users, including in Japan, this year and already has 175 million passkeys created for sign-in to amazon.com across geographies.
- Google recently reported that 800 million Google accounts now use passkeys, resulting in more than 2.5 billion passkey sign-ins over the past two years. Also, Google's sign-in success rates have improved by 30% and sign-in speeds have increased by 20% on average.
- Sony Interactive Entertainment, the company behind PlayStation, released passkeys as an alternative option to passwords for their global gaming community and observed a 24% reduction in sign-in time on its web applications for passkey users. Additionally, high conversion rates have been observed, with 88% of customers who are presented with the benefits of passkeys successfully completing enrollment.

Adoption also grew in the workforce this year as more companies bolstered their authentication options with passkeys, including Hyatt, IBM, Target and TikTok.
Consumers gained flexibility and choice for passkey management this year, as more credential managers, such as Apple, Google, Microsoft, 1Password, Bitwarden, Dashlane and LastPass, expanded their passkey support across ecosystems, and the FIDO Alliance announced new draft specifications for users to securely move passkeys and all other credentials across providers.
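For developers wondering what creating a passkey actually involves, the sketch below shows a minimal browser-side WebAuthn registration call of the kind these deployments build on. The relying-party details, user values, and challenge handling are placeholders; in a real integration the challenge comes from, and the resulting credential is returned to, the service's server.

```typescript
// Minimal browser-side passkey (WebAuthn) registration sketch.
// The challenge and user.id must come from the relying party's server in practice.
async function registerPasskey(): Promise<Credential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // placeholder; use a server-issued challenge
  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Service", id: "example.com" }, // placeholder relying party
      user: {
        id: new TextEncoder().encode("user-1234"),        // placeholder stable user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",       // discoverable credential, i.e. a passkey
        userVerification: "preferred",
      },
    },
  });
}
```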
Notable Momentum in Japan
Specifically in Japan, new passkey deployments and successes were announced from Nikkei Inc., Nulab Inc., and Tokyu Corporation:
- Nikkei Inc. unveiled its plan to deploy passkeys for Nikkei ID, enabling the millions of Nikkei ID customers to begin their migration from passwords to passkeys. This will be launching in February 2025 or later.
- Nulab Inc. announced a dramatic improvement in passkey adoption for Nulab accounts based on the outcome of the Passkey Hackathon Tokyo this past November.
- Tokyu Corporation has reported that 45% of TOKYU ID users have passkeys, and sign-ins with passkeys are 12 times faster than a password plus an emailed OTP.

Additionally, Nikkei Inc., Nulab Inc. and Tokyu Corporation all successfully demonstrated their passkey implementations at the Passkey Hackathon Tokyo, organized by Google and sponsored by the FIDO Alliance, in June 2024. Companies receiving awards included Nulab and Tokyu, as well as two teams of students from Japanese universities:
- The Keio University team received the grand winner award for adopting passkeys combined with an IoT device – a smart door lock created with a 3D printer.
- The Waseda University team received another FIDO award for their unique user authentication protocol and implementation combining passkeys, verifiable credentials and zero-knowledge proofs.

In addition to these two teams, a group at the Institute of Information Security (Yokohama, Japan) presented their research entitled "A Study on Notification Design to Encourage General Users to Use Passkeys" at a workshop organized by the Information Processing Society of Japan (IPSJ) on December 4, 2024. These activities demonstrate how students in academia are embracing passkeys as an attractive option for life without passwords.
Organizations that have already deployed passkeys for more than a year shared new successes:
- KDDI now has more than 13 million au ID customers using FIDO and has seen a dramatic decrease (nearly 35%) in calls to its customer support center as a result, while managing FIDO adoption carefully for both subscribers and non-subscribers.
- LY Corporation property Yahoo! JAPAN ID now has 27 million active passkey users, and approximately 50% of user authentication on smartphones now uses passkeys. LY Corporation said that passkeys have a higher success rate than SMS OTP and are 2.6 times faster.
- Mercari has 7 million users enrolled in passkeys and enforces passkey login for users enrolled with synced passkeys. Notably, there have been zero phishing incidents at Mercoin, a Mercari subsidiary, since March 9, 2023.
- NTT DOCOMO has increased its passkey enrollments, and passkeys are now used for approximately 50% of authentication by account users. NTT DOCOMO notably reports significant decreases in successful phishing attempts, and there have been no unrecognized payments at docomo Online Shop since September 23, 2022.

To drive further adoption in Japan, the FIDO Alliance announced that Passkey Central, the website for consumer service providers to learn more about why and how to implement passkeys for simpler and more secure sign-ins, is now available in Japanese. Passkey Central provides visitors with actionable, data-driven content to discover, implement, and maintain passkeys for maximum benefits over time. The comprehensive resources on Passkey Central include:
- Introduction to passkeys
- Business considerations and metrics
- Internal and external communication materials
- Implementation strategies & detailed roll-out guides
- UX & design guidelines
- Troubleshooting
- And more implementation resources, such as a glossary, Figma kits, and accessibility guidance

66 of the FIDO Alliance's 300+ member companies, many of them based in Japan, are actively taking part in the FIDO Japan Working Group (FJWG). The FJWG is now beginning its 9th year of working together to spread awareness and adoption of FIDO in the region.
Consumers and workforce users are aware of, and want to use, passkeys
Passkeys are not only available across a wide array of services; recent studies also show that consumers and workforce users are aware of, and want to use, passkeys. Recent FIDO Alliance research shows that in the two years since passkeys were first made available, consumer awareness has risen by 50%, up from 39% in 2022 to 57% in 2024. Among consumers who have adopted at least one passkey, 1 out of 4 enables passkeys whenever possible. A majority of consumers also believe passkeys are more secure (61%) and more convenient (58%) than passwords. Since 2023, passkey awareness among APAC consumers has grown significantly more than the global average and other regions in 2024. Consumers from China (80%), India (70%), Japan (62%), and Singapore (58%) reported significantly higher passkey adoption in the last year, with Australia (52%) and South Korea (44%) trending close to the overall average (59%).
Sources:
Online Authentication Barometer 2024: Consumer Trends & Attitudes on Authentication Methods.
https://fidoalliance.org/research-findings-consumer-trends-and-attitudes-towards-authentication-methods/
Consumer Password & Passkey Trends: World Password Day 2024.
https://fidoalliance.org/content-ebook-consumer-password-and-passkey-trends-wpd-2024/
About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies, and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services.
Building the Future of Digital Privacy: How you can Contribute, Implement, and Advocate
The applied cryptography community is making significant strides in standardizing BBS signatures and their extensions - a crucial development for privacy-preserving digital credentials. This work represents a major step forward in enabling more private and secure digital interactions while maintaining the necessary balance between privacy and accountability.
What is BBS?
BBS is a secure digital signature mechanism that proves information is authentic and unchanged, similar to how a notary validates physical documents. Unlike other digital signature mechanisms, BBS enables powerful privacy features while maintaining security. Its name comes from its creators – cryptographers Dan Boneh, Xavier Boyen, and Hovav Shacham. This combination of security and privacy makes it particularly well-suited for digital credential systems, where protecting both authenticity and user privacy is crucial.
Why BBS Matters
BBS signatures provide a unique combination of privacy and practical utility that makes them especially valuable for digital credentials.
From a technical perspective, BBS stands out for:
- Constant-size signatures regardless of the number of messages signed
- True unlinkability between different uses of the same credential
- The ability to sign and selectively reveal multiple messages within a single signature

These technical properties translate into practical benefits for digital credentials:
- Selective disclosure: Users can prove specific facts about their credentials (like their city of residence) without revealing other details (like their full address)
- Unlinkable disclosure: Each privacy-preserving use of a credential cannot be traced to other uses
- Anti-theft features: Credentials can be cryptographically bound to their owner while maintaining privacy
- Controlled recognition: Services can securely recognize returning users without enabling cross-service tracking

Together, these capabilities enable privacy-preserving digital credentials that are both secure and practical for real-world deployment - from government IDs to professional certifications to age verification systems.
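To make the selective-disclosure flow concrete, here is an illustrative sketch of how an application might use a BBS-style library. The interface below is hypothetical (it does not name a specific implementation or its exact API); it simply mirrors the sign / create-proof / verify-proof roles described by the specifications.

```typescript
// Hypothetical BBS-style interface, for illustration only (not a real library API).
interface BbsSuite {
  sign(secretKey: Uint8Array, messages: Uint8Array[]): Promise<Uint8Array>;
  createProof(opts: {
    publicKey: Uint8Array;
    signature: Uint8Array;
    messages: Uint8Array[];
    revealedIndexes: number[];
    nonce: Uint8Array;
  }): Promise<Uint8Array>;
  verifyProof(opts: {
    publicKey: Uint8Array;
    proof: Uint8Array;
    revealedMessages: Map<number, Uint8Array>;
    nonce: Uint8Array;
  }): Promise<boolean>;
}

const enc = (s: string) => new TextEncoder().encode(s);

async function demo(bbs: BbsSuite, issuerSk: Uint8Array, issuerPk: Uint8Array) {
  // Issuer signs all credential attributes with a single, constant-size signature.
  const messages = [enc("name=Alice"), enc("city=Zurich"), enc("street=Bahnhofstrasse 1")];
  const signature = await bbs.sign(issuerSk, messages);

  // Holder reveals only the city (index 1), keeping name and street hidden.
  const nonce = crypto.getRandomValues(new Uint8Array(16)); // verifier-supplied in practice
  const proof = await bbs.createProof({ publicKey: issuerPk, signature, messages, revealedIndexes: [1], nonce });

  // Verifier checks the proof against just the revealed attribute; proofs from
  // different presentations are unlinkable to each other.
  return bbs.verifyProof({ publicKey: issuerPk, proof, revealedMessages: new Map([[1, messages[1]]]), nonce });
}
```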
BBS Standards Landscape
The standardization of BBS involves several complementary efforts across standards bodies:
Core Technical Specifications
BBS and its extensions are currently undergoing standardization within the Crypto Forum Research Group (CFRG) of the Internet Research Task Force (IRTF). (The IRTF is related to the IETF but focuses on longer term research related to the Internet.)
The Decentralized Identity Foundation hosts the development of this work in the Applied Crypto Working Group.
This work is currently represented by 3 specifications:
"BBS Signatures": BBS Signatures are a privacy-preserving way to sign digital credentials. They let you prove specific facts about your credentials (like your city of residence) without revealing other details (like your full address). Specification: The BBS Signature Scheme "Blind BBS": Blind BBS enables credential issuance where the issuer can cryptographically sign information without seeing its contents - useful for privacy-preserving identity binding. Specification: Blind BBS Signatures “BBS Pseudonyms”: BBS Pseudonyms are an anti-fraud mechanism to prevent digital credential cloning. They can be used by verifiers (like websites) to identify someone they've interacted with before, but in a way that cannot be correlated across different verifiers. Specification: BBS per Verifier LinkabilityStatus: The first item, BBS Signatures, is a mature working group document that completed an initial review by the CFRG panel. The other two – Blind BBS and BBS Pseudonyms – are on their way to adoption, and they could benefit from your support, as described below.
Implementation Standards
The W3C Verifiable Credentials Working Group is building on this foundation by developing the Data Integrity BBS Cryptosuite specification. This work integrates BBS signatures into W3C Verifiable Credentials, provides comprehensive test suites, and ensures that implementations across different platforms will be interoperable and reliable.
Call to Action
1. Voice Your Support
Urgent: Deadline December 20th, 2024
The CFRG has opened an official adoption call for both the Blind BBS and BBS Pseudonyms specifications. This is a crucial moment for these privacy-enhancing technologies.
Update: these specifications have been accepted! Thank you for your support.
Your voice matters - if you care about privacy-preserving technologies, please participate in the vote and share your support.
2. Get Involved
Want to dive deeper into this work? There are several ways to engage based on your interests:
- Developers: Contribute to W3C test suites or implement any of the specifications to test them out
- Standards developers: Join the discussion at any of the above standards groups
- Cryptographers: Review and provide feedback on the specifications and join the technical discussions
- Enthusiasts / everyone: If you want to follow along with the progress, subscribe to DIF's blog for updates

To participate in DIF's Applied Crypto Working Group, you can join DIF; contact us at membership@identity.foundation if you have any questions.
NEWARK, NJ, December 10, 2024 – Edge has been awarded the prestigious National Science Foundation EArly-Concept Grant for Exploratory Research (EAGER) for the project, EAGER: Empowering the AI Research Community through Facilitation, Access, and Collaboration. This groundbreaking initiative enhances the role of Campus Champions, research computing facilitators at academic institutions, to expand access to advanced artificial intelligence (AI) resources, and foster collaborations that democratize AI research.
The project addresses significant barriers in AI research by connecting under-resourced and minority-serving institutions with critical tools and expertise. Through mentorship and opportunities to participate in key conferences such as the National AI Research Resource (NAIRR) Pilot annual meeting, the Practice and Experience in Advanced Research Computing (PEARC) conference, and the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC), the initiative empowers Campus Champions to strengthen research capabilities at their institutions. Campus Champions will also participate in discipline-specific conferences, contribute training materials to NAIRR’s resource repository, and engage in partnerships with peer organizations like the Campus Research Computing Consortium (CaRCC) and the Minority Serving – Cyberinfrastructure Consortium (MS-CC).
Forough Ghahramani, Principal Investigator (PI) of the project and Vice President of Research and Innovation, Edge, comments, “The NSF EAGER award is a testament to the transformative power of collaboration and facilitation in advancing AI research. By equipping Campus Champions with the tools and opportunities to bridge resource gaps, this project fosters innovation and inclusivity across the research community.” Continues Ghahramani, “I am excited to work alongside co-PIs and the Campus Champion Leadership team to strengthen the network of Champions and promote access to AI resources for institutions nationwide, especially those historically underrepresented in research computing.”
PI and Co-Principal Investigators include:
- Forough Ghahramani (PI), Ed.D., Vice President for Research and Innovation
- Marina Kraeva, Ph.D., Manager, High Performance Computing, Iowa State University of Science and Technology
- Cynthia L. Burrows, Senior Research Facilitator, Research IT Services, University of California, San Diego
- Michael D. Weiner, Ph.D., Senior Research Scientist and Research Computing Facilitator, Georgia Tech Research Corp

The project aligns with the National Science Foundation's mission to advance science, health, prosperity, and equity. By expanding the Campus Champions program, it supports a diverse community of researchers, helping them overcome barriers in AI research through access to national resources and targeted mentorship. Together, the team will work to expand opportunities for under-resourced and minority-serving institutions, fostering an equitable and inclusive research ecosystem.
The Edge proposal was submitted in response to the Dear Colleague Letter (DCL) NSF 24093, which announced NSF's interest in receiving EAGER proposals and supplemental funding requests for National Artificial Intelligence Research Resource (NAIRR) Demonstration Projects that highlight innovative use cases and technologies making use of the NAIRR Pilot.
About the NAIRR Pilot
The National Artificial Intelligence Research Resource (NAIRR) Pilot, led by the Office of Advanced Cyberinfrastructure (OAC) of the National Science Foundation (NSF), has been launched as a proof of concept to demonstrate the value and potential impact of the NAIRR concept as described in the NAIRR Task Force Report.
The NAIRR Pilot aims to address researcher needs by increasing access to a diverse ensemble of AI-related infrastructure resources, including computational capabilities, AI-ready datasets, pre-trained models, software systems and platforms. In addition to facilitating access for researchers and educators, important aspects of the vision for the NAIRR Pilot include reaching new and broad communities; fostering positive end user experiences; and building a NAIRR Pilot user community. More information about the NAIRR Pilot can be found at the NAIRR Pilot NSF site and at nairrpilot.org.
Full details about the NSF grant are available here. To learn more about Edge's commitment to initiatives of this nature, visit https://njedge.net/research/resources-featured-research-reports/.
About Edge
Edge serves as a member-owned, nonprofit provider of high performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable and thought-leading purpose-built, advanced connectivity, technologies and services.
The post Edge Receives $295,536 National Science Foundation (NSF) Grant to Enhance the Role of Campus Champions appeared first on NJEdge Inc.
Energy Web is proud to announce the launch of its fully managed Worker Node offering, now available through the Energy Web Launchpad SaaS platform. This innovative solution provides organizations with a powerful, streamlined way to execute decentralized computation while bridging technical complexity with operational simplicity.
What is the Worker Node?
The Worker Node is an off-chain runner designed to execute custom logic using Node-RED flows. Its lifecycle and operational parameters are managed through Energy Web X (EWX) worker node pallet solutions and solution group definitions.
Each Worker Node is equipped with a dedicated Worker Account, which is seamlessly linked to an EWX Operator Account. This linkage enables the Worker Node to continuously monitor on-chain actions, ensuring responsive adjustments to Operator Account solution group subscriptions.
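As a rough illustration of the kind of custom logic a Worker Node runs, below is a trimmed-down Node-RED flow definition (an inject node that periodically triggers a function node, wired to a debug node). The node properties are simplified and the ids are arbitrary; real flows exported from the Node-RED editor carry additional fields, and the worker-specific nodes provided by Energy Web are not shown here.

```typescript
// Simplified Node-RED flow: inject -> function -> debug (illustrative only).
const exampleFlow = [
  { id: "inject1", type: "inject", z: "flow1", name: "every 60s", repeat: "60", wires: [["fn1"]] },
  {
    id: "fn1",
    type: "function",
    z: "flow1",
    name: "do some work",
    // The function body runs inside the Node-RED runtime; here it just stamps the payload.
    func: "msg.payload = { computedAt: Date.now() }; return msg;",
    outputs: 1,
    wires: [["dbg1"]],
  },
  { id: "dbg1", type: "debug", z: "flow1", name: "result", wires: [] },
];
```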
Revolutionary Capabilities
The Worker Node introduces a host of advanced features to support decentralized computation:
- Atomic Decentralized Computation: Acts as the foundational unit for decentralized computation networks, driving DePIN (Decentralized Physical Infrastructure Networks) use cases.
- Lightweight and Blockchain-Controlled: Fully managed via blockchain actions for secure and efficient operations.
- Low-Code Simplicity: Powered by the Node-RED runner engine, enabling rapid deployment within a mature low-code environment.
- Flexible Hosting: Supports diverse hosting options to suit varying user requirements.
- Constantly Evolving: Regular updates based on feedback from early adopters ensure the Worker Node remains cutting-edge.

Why Choose the Worker Node Launchpad Offering?
The Launchpad's fully managed Worker Node offering is the ideal choice for users seeking reliability and simplicity:
- Eliminate the need to keep hardware, such as laptops, running 24/7.
- Access a reliable, server-based solution supported by a dedicated team to handle any issues.
- Transition seamlessly from the Marketplace Desktop App Worker Node to the managed SaaS alternative.

A Glimpse into the Future
The Energy Web ecosystem continues to grow, with exciting developments on the horizon, including a new Marketplace Web App to enhance modularity and flexibility. To celebrate the launch, Energy Web is offering 25 exclusive, one-month 100% discount codes for the Worker Node Managed Offering, valid until March 2025.
This exclusive trial empowers users to explore the Worker Node Launchpad Offering risk-free, with the option to continue or revert to the Marketplace app afterward — ensuring maximum flexibility.
Get Started Today
Discover the transformative potential of the Worker Node through detailed documentation and resources designed to help users transition effortlessly between the Marketplace app and the Launchpad offering.
The next few months promise exciting updates from Energy Web. Stay tuned for more surprises as we continue to expand the boundaries of decentralized technology.
About Energy Web
Energy Web is a global technology company driving the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to create innovative market mechanisms and decentralized applications, empowering energy companies, grid operators, and customers to take control of their energy futures.
Energy Web Unveils Fully Managed Worker Node on Launchpad was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post The Impact of AI on Education appeared first on NJEdge Inc.
The post Interpreting ADA Title II Compliance: Implications for Institutions and How You can Prepare appeared first on NJEdge Inc.
We are thrilled to share that TODAY (and also tomorrow) we are showcasing shared signals interoperability in action at the Gartner Identity and Access Management (IAM) Summit, in Grapevine, Texas.
Following the significant levels of interest generated by the success of our interoperability session at the last Gartner IAM Summit in London earlier this year, Gartner invited us back for a follow up.
This afternoon, industry leaders, security professionals, architects and identity management experts will have the opportunity to witness how the shared signals open standards are reshaping modern security practices. For organizations striving to enhance security in an interconnected world, it promises to be a showcase of collaboration, innovation, and real-world application of these groundbreaking standards.
What to expect
Shared Signals Work Group implementers will conduct the Shared Signals Interop Demos: CAEP and RISC in Action sessions – three on Tuesday and three on Wednesday.
Here, the attendees will see real-world implementations from industry leading companies of CAEP and SSF in action, showcasing how open standards solve complex security challenges.
Implementers presenting at these interoperability sessions are:
AppOmni, caep.dev, Cisco, Delinea, Google, IBM, Jamf, Okta, Omnissa, SailPoint, Saviynt, SGNL, Thales, and WinMagic.

Attendees joining the sessions will:
- learn about the latest developments in CAEP and SSF and their critical role in Zero Trust architectures;
- network with experts and engage with leading implementers from the above listed organizations;
- experience live demonstrations and see interoperable implementations of SSF and CAEP in action;
- have the opportunity to talk one-on-one with participants of the interoperability event to explore use cases and integration strategies.

Building on success and driving interoperability
The OpenID Foundation's return to the highly anticipated global event follows an extremely successful session in March earlier this year, when the Gartner Summit took place in London. It proved to be a pivotal moment for SSF, positioning SSF as a game-changing API that enhances the security and efficiency of identity systems.
The Shared Signals Working Group hosted a similar interoperability session and a breakout session together with industry leaders, including Okta, SailPoint, and Cisco, as well as startups like VeriClouds and SGNL. They demonstrated the interoperable implementations of the Shared Signals Framework (SSF) and its associated protocols – Continuous Access Evaluation Protocol (CAEP) which allows continuous assessment of user sessions and dynamic authorization decisions, and Risk Incident Sharing and Coordination (RISC), which facilitates sharing of account-level risk events among service providers.
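As a concrete illustration of what gets shared, here is a simplified Security Event Token (SET) payload for a CAEP session-revoked event, modeled on the published CAEP examples. The values are placeholders, and the exact claims (for example, how the subject is identified) depend on the transmitter's configuration and the current draft of the specifications.

```typescript
// Simplified CAEP "session revoked" event payload (the claims set of a signed SET JWT).
// Values are placeholders; see the SSF and CAEP specifications for the normative format.
const sessionRevokedSet = {
  iss: "https://transmitter.example.com",     // event transmitter
  jti: "756E69717565206964656E746966696572",  // unique token identifier
  iat: 1734048000,                            // issued-at (seconds since epoch)
  aud: "https://receiver.example.com",        // event receiver
  events: {
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
      subject: {
        format: "email",
        email: "user@example.com",
      },
      event_timestamp: 1734047990,
      initiating_entity: "policy",            // e.g. revoked by a policy decision
      reason_admin: { en: "Device no longer compliant" },
    },
  },
};
```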
These demonstrations not only highlighted the power of open standards in building secure, Zero Trust architectures, but also showcased a strong endorsement of open standards by leading technology providers. The table below shows the status of interoperability testing going into today's event. For more information and to schedule a meeting with interoperability participants, please contact us.
Interoperability Event Participant Status
Below are all of the committed participants of the event and their interoperability status prior to the Gartner IAM event.
The status table lists each participating implementation and organization (SGNL, Saviynt, AppOmni, SailPoint, Omnissa, Thales, IBM, Cisco, WinMagic, Okta, JAMF, and Delinea) with its demonstrated interoperability status as a Transmitter and as a Receiver, qualified with the delivery method (push or poll) where only one method was demonstrated.
Notes:
A checkmark (✓) in a cell means that the corresponding implementation has successfully demonstrated interoperability with at least one other implementation that participated in this interoperability event, in the specific role (Transmitter or Receiver). If a Transmitter or Receiver could only demonstrate one delivery method (across all other interoperability participants), then the checkmark will be followed by a qualifier, i.e. "push" or "poll", depending on the delivery method demonstrated. A checkmark without a qualifier (push or poll) indicates that the implementation successfully interoperated with at least one other participating implementation using the push method, and with at least one other participating implementation using the poll method.

About The OpenID Foundation (OIDF)
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Shared Signals Interoperability at Gartner IAM first appeared on OpenID Foundation.
2024 has been a breakthrough year for Bitcoin engineering, driven by the innovative toolsets provided by the Elastos SmartWeb ecosystem. The introduction of BeL2 (Bitcoin-Elastos Layer 2) has redefined what is possible for decentralized finance (DeFi) on Bitcoin. With a vision to make Bitcoin "smart", BeL2 provides a completely decentralized clearing network, enabling Bitcoin to engage with cross-chain smart contracts while remaining on its secure main network. Imagine if Bitcoin could talk with other blockchains, execute complex contracts, and unlock its dormant potential—this is BeL2's transformative promise.
BeL2 and the New Bretton Woods Vision
Since Bitcoin's inception in 2009, it has grown to a $1.9 trillion market cap, cementing its role as the most secure and trusted cryptocurrency. However, its programmability and financial utility have remained limited compared to other blockchains. Solutions like wrapped Bitcoin (WBTC) have emerged but rely on centralized custodians to access smart contracts, undermining decentralization and sparking fierce debate over company ownership.
BeL2 disrupts this model by ensuring interoperability without transferring assets. Instead of moving Bitcoin across chains, BeL2 transmits messages, also known as proofs, which allow smart contracts on Turing-complete blockchains to verify and execute complex financial operations based on collateralisation on Bitcoin. This trustless model preserves Bitcoin’s integrity while enabling applications like loans, exchanges, and stablecoin issuance—laying the foundation for a Bitcoin-backed “New Bretton Woods” system. BeL2 integrates four key elements to realize its vision for Native Bitcoin DeFi:
- Collateralization: Bitcoin is locked in non-custodial, native scripts on its mainnet, ensuring maximum security and decentralization for owners.
- Verification: Zero-Knowledge Proofs (ZKPs) generate verifiable cryptographic proofs for Bitcoin transactions, providing trustless verification for Layer 2 applications.
- Communication: The BTC Oracle bridges proofs from Bitcoin into Ethereum Virtual Machine (EVM) smart contracts, enabling cross-chain interactions.
- Execution: Decentralized Arbiter nodes facilitate time-based execution and dispute resolution, ensuring fairness and trust in financial transactions.

Together, these components create a robust protocol that unlocks the full potential of Bitcoin for DeFi, providing developers with the ability to build smart contract applications which open up Bitcoin Finance while maintaining its security ethos.
ELA Arbiters: The Final Piece of the Puzzle
The Arbiter network is the final layer of BeL2's V1 protocol, providing execution services for Bitcoin-backed transactions, resolving disputes, and maintaining trust through decentralized and collateralized mechanisms in return for fees. At the heart of this system lies ELA, a Bitcoin-secured BTCFi reserve asset fortified by merge mining. By leveraging Bitcoin's immense hash power, ELA inherits uncompromised security without extra energy costs. Its fixed supply and transparent emission schedule make ELA an ideal collateral asset, anchoring BTCFi with Bitcoin-level trust. As the "queen" to Bitcoin's "king," ELA is used as collateral by Arbiter nodes on the network to provide a secure, reliable Native Bitcoin DeFi environment.
This month, on the 30th of December, BeL2 will be releasing the Beta version of its Arbiter system. Key points to first understand:
- The Beta stage marks the release of a product to an initial group of community users for testing and feedback.
- For security purposes, BeL2 will implement a 3-phase rollout, beginning in December and concluding in April. Phase one, launching this month, introduces the Beta version.
- The BeL2 Arbiter Beta will impose a $100 maximum limit on ELA collateral deposits. Collateral can be provided in ELA or ELA BPOS NFTs.
- Initially, rewards will be issued exclusively in ELA. However, upcoming applications and the scaling of Arbiters over the next few months will introduce utility for BTC rewards.

BTC Lending: A Flagship BeL2 Use Case
The Arbiter network will first support the BeL2 BTC lending demo, the application developed by the team to validate the underlying infrastructure for Native Bitcoin DeFi:
- Secure Lending: Borrowers collateralize BTC without transferring it off the mainnet.
- No Forced Liquidations: Fixed interest rates protect borrowers from short-term price volatility.
- Transparent Dispute Resolution: Arbiter nodes ensure fair outcomes for all parties involved.

Criteria for Joining the Fully Rolled-Out Arbiter Network
Beyond Beta, once the network has stabilized and all phases of the rollout are complete, users will be required to meet the following criteria to join the finalized Arbiter Network:
- A Dedicated BTC Wallet: Required for secure custody and dispute resolution.
- Exit Flexibility: Arbiters can exit the network if no active arbitration commitments are pending, allowing them to manage their participation.
- Stake ELA or BPoS NFTs: A minimum stake of 1,000–5,000 ELA is recommended to ensure commitment and secure arbitration responsibilities.
- Define Term End Date: Arbiters must set a staking duration, with longer terms increasing their selection chances.
- Purpose of Staking: Staked assets act as collateral, guaranteeing impartiality and commitment in arbitration events.
- Set Your Fee Rate: Arbiters define an annual percentage during registration (e.g., 12%).
- Example Income: A 2-month arbitration task with 10,000 ELA staked at 12% annual interest would yield 200 ELA (a short calculation sketch appears at the end of this section).
- Aligned Rewards: Fees ensure that Arbiters are compensated for their role in securing transactions.
- Event Monitoring: Promptly submit cryptographic signatures for arbitration events.
- No Judgment Needed: Arbiters verify predefined events without adjudicating disputes, simplifying the process.
- Manual Operations: Tasks can be performed via a web interface, though timeliness is critical to avoid penalties.

Applications requiring arbitration must:
Applications requiring arbitration must:
Register with the Network: Initially approved by administrators, transitioning to DAO governance over time.
Log Transactions: dApps must log all transactions with Arbiter contracts at creation to ensure future arbitration is possible.
Fees for arbitration end when:
An arbitration request is initiated.
The transaction is closed or reaches its deadline.
A Vision Realized
The introduction of Arbiters completes BeL2’s foundational layer, enabling trustless, decentralized financial applications on Bitcoin. This 3-phase rollout marks a milestone in Bitcoin’s evolution from a store of value to a programmable asset that underpins a global, decentralized financial system. BeL2 is on track to redefine how Bitcoin interacts with the world, unlocking over $1 trillion in dormant value and empowering its community to embrace a future free from custodial risks and centralized limitations.
Join the BeL2 Movement
As an Arbiter Beta participant, you are not merely engaging with a network—you are actively shaping the future of decentralized finance with Elastos. This is a call to action for the community to support the network by setting up nodes, providing valuable feedback, and driving the BeL2 network toward a successful market launch in 2025. Detailed instructions on how to set up a node will be published before December 30th, supporting the launch of the Arbiter Beta network. Did you enjoy this article? To learn more, follow Infinity for the latest updates here!
E-ID participation meeting, Zollikofen 2024.12.06
Human Colossus Foundation’s Dynamic Data Economy perspective
On December 6, 2024, the Swiss Federal Council made decisions regarding the technical implementation of the Confederation's new electronic identity proof (e-ID) and the underlying operational infrastructure. A press release [1] describes a two-stage launch of the e-ID, with the first delivery planned for 2026. The first stage will introduce a technology used by the European Union. At the same time, work will continue to develop additional solutions that could be used in a second stage to meet even higher privacy protection requirements, in particular the requirement that the various uses of the e-ID not be traceable to an individual.
DVS4U: Integration of the E-ID ecosystem into cantonal and municipal systems
On the same day, representatives of the Human Colossus Foundation attended the annual hybrid participation meeting of the E-ID project, which took place in the Federal Office of Information Technology, Systems and Telecommunication (FOITT) buildings in Zollikofen. The Human Colossus Foundation's (HCF) contribution is the technology for securing and styling the visualisation of the E-ID on mobile devices. E-ID pilots have already implemented HCF's Overlays Capture Architecture (OCA) [2] in different contexts (see, for example, the canton Thurgau proof of concept [3]). The Foundation will continue to support the E-ID team on semantic harmonisation for a stylish but secure visualisation of digital proofs.
However, our goal at the Human Colossus Foundation is to promote and support E-ID projects beyond the semantic realm. We support E-ID and public service initiatives in Switzerland and abroad. Our Dynamic Data Economy (DDE) approach anticipates the future implementation of national infrastructure components, including distributed governance and decentralised authentication. These technologies go beyond the building of complex verifiable credential use cases. They enable an ecosystem approach, helping integrate many providers of digital proofs, further enabling diverse use cases for E-ID.
We welcome the two-stage approach of the project that confirms a go-live for 2026 while activating research and development for higher security and privacy.
References:
[1] Swiss Federal Council December 6 2024 press release https://www.admin.ch/gov/en/start/documentation/media-releases.msg-id-102922.html
[2] OCA website and specification https://oca.colossi.network/
[3] Canton Thurgau, DVS4U: Integration des E-ID Ökosystems in kantonale und kommunale Systeme: https://github.com/e-id-admin/general/blob/main/meetings/20241206_E-ID-Partizipationsmeeting_DVS4U_DE.pdf
Other Information
Swiss Digital Identity and Trust infrastructure blog posts
Public Beta is Open Source: https://www.eid.admin.ch/en/public-beta-ist-open-source-e
SWIYU – Notes on the design and name of the e-ID and trust infrastructure: https://www.eid.admin.ch/en/swiyu-e
Project E-ID GitHub: https://github.com/e-id-admin
The Human Colossus Foundation is a neutral but technology-savvy Geneva-based non-profit foundation under the surveillance of the Swiss federal authorities.
The OpenID Foundation’s FAPI Working Group recommends approval of the following specifications as OpenID Final Specifications.
FAPI 2.0 Security Profile (other formats: XML, MD)
FAPI 2.0 Attacker Model (other formats: XML, MD)
A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. This note starts the 60-day public review period for the specification drafts in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Final Specifications. For the convenience of members, voting will actually begin a week before the start of the official voting period for members who have completed their reviews by then.
The relevant dates are:
Final Specification public review period: Monday, December 9, 2024 to Friday, February 7, 2025 (60 days)
Final Specification vote announcement: Saturday, January 25, 2025
Final Specification early voting opens: Saturday, February 1, 2025
Final Specification voting period: Saturday, February 8, 2025 to Saturday, February 15, 2025 (7 days)*
* Note: Early voting before the start of the formal voting period will be allowed.
The FAPI Working Group page is https://openid.net/wg/fapi/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.
You can send feedback on the specifications in a way that enables the working group to act upon it by (1) signing the OIDF Contribution Agreement at https://openid.net/intellectual-property/ to join the work group, (2) joining the work group mailing list at openid-specs-fapi@lists.openid.net, and (3) sending your feedback to the list.
Marie Jordan – OpenID Foundation Board Secretary
About The OpenID Foundation (OIDF)
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Public Review Period for Proposed Final FAPI 2.0 Security Profile and Attacker Model Specifications first appeared on OpenID Foundation.
Despite serialisation being mandated in approximately 52 countries, many of the countries most at risk remain unregulated. In the absence of regulation, pharmaceuticals may be more vulnerable to product counterfeiting, diversion or adulteration.
Business goal: GS1 Healthcare Case Studies 2023-2024 (gs1-healthcare_cases_studies_2024_final_.pdf)
In an increasingly interconnected world, digital processes must function seamlessly without interruptions. However, one element is indispensable for this: trust in digital data. Without trust, we face uncertainty, additional validation efforts, and delays in business processes.
How can we ensure data integrity, prevent manipulation, and efficiently meet regulatory requirements? And how can we achieve a state where information flows seamlessly between organizations without the need to question every document?
This is precisely the focus of our survey. By participating, you will help us better understand challenges and needs. Every perspective matters: whether you’re in leadership, IT, legal, or finance, your experiences and insights will directly contribute to making digital processes safer, more trustworthy, and easier to implement.
How can you help?
Participate: Take a few minutes to complete our survey.
Spread the word: Share the survey within your network. Every additional voice gives the results greater weight and impact.
Together, we can make digital data so trustworthy that it forms the foundation for seamless, efficient, and secure business processes.
Join now: [Participate here!] Thank you for your support!
Contacts:
Dr. Roman Zoun is a board member of DIDAS and leads the Adoption Working Group. Professionally, he is with Swisscom, where he is responsible for the Digital Wallet division, focusing on the promotion of Self-Sovereign Identity (SSI) and digital trust infrastructures. As an expert in digital identities, data protection, and IT security, he brings extensive experience in identity and access management. In addition to his role at Swisscom, he serves as a board member of the OpenWallet Foundation, contributing to the development of secure and interoperable digital wallets. Dr. Zoun earned his Ph.D. in Computer Science from the Otto von Guericke University Magdeburg, with a strong academic background in cloud computing and mass spectrometry. His passion for innovation and commitment to trustworthy digital solutions make him a key figure in the field.
Jan Carlos Janke teaches and conducts research as a senior academic staff member on topics related to Digital Business Innovation, focusing on Blockchain, SSI, Digital Identities, Digital Trust, IT Management, and AI. As the Community Manager of DIDAS and Co-Lead of the Digital Identities Short Course and CAS Blockchain, he contributes to education in digital identity, data sovereignty, and blockchain technologies. He holds dual master’s degrees in Management and Finance from the European Business School in Wiesbaden and the EADA Business School in Barcelona. With nearly ten years of experience in the German financial sector, Janke was previously Head of Business Development at the Frankfurt School Blockchain Center under Philipp Sandner before joining HSLU.
More to Read?
As we step into an era defined by digital transformation, DIDAS is at the forefront, championing the adoption of Self-Sovereign Identity (SSI) and trust infrastructures. Our vision is clear: a Switzerland with more privacy and less friction in the digital realm. Here’s how we’re working to make this vision a reality.
Why We Do What We Do
At DIDAS, our purpose drives everything we do. We are committed to:
Educating, identifying, creating, and improving SSI use cases in Switzerland.
Supporting both companies and individuals with an open, customer-centric approach.
Building an ecosystem that fosters understanding, innovation, and optimization of Self-Sovereign Identity.
Our goal is to tackle the pain points of digital interactions by creating solutions that:
Enhance efficiency and effectiveness in digital systems.
Establish a common language for trust infrastructures.
Improve user experience and reduce friction.
Ecosystem Building: A Multi-Dimensional Approach (Source: Grivas, HSLU 2024 Master Business IT - Digital Ecosystems)
DIDAS leverages the diversity of ecosystems to drive adoption and innovation:
Open Ecosystems. Logic: Diversity of partners for a broad knowledge base. Example: Crypto Valley, Impact Hub, Cardossier. Goal: Knowledge exchange between partners.
Controlled Ecosystems. Logic: Joint alignment of a few partners under an orchestrator. Example: Helvetia Eco-System HOME, Twint. Goal: Deliver superior value propositions through aligned collaboration.
Platform Ecosystems. Logic: Harnessing network effects with interchangeable partners. Example: Amazon, AppStore. Goal: Create superior value propositions through network effects.
By fostering these ecosystems, DIDAS ensures a balance between openness, control, and platform-driven innovation.
Adoption: A Long Journey with High Rewards (Source: Zoun, DIDAS 2024, Adoption Working Group Presentation)
The adoption of trust infrastructures is a gradual process, but one that promises immense benefits:
Authentic data enables better efficiency and liability management.
Privacy-friendly solutions redefine the user experience, minimizing friction.
Stakeholders gain a competitive edge through participation in cutting-edge digital ecosystems.
Together, let’s make the vision of a seamless, privacy-first digital ecosystem a reality. Stay connected and be part of the transformation. Learn more at didas.swiss.
For some time, Elastos faced hurdles related to liquidity and accessibility. Relying on centralized exchanges went against the principles of decentralization, and earlier cross-chain bridging and DEX efforts involved high-fee bridges and complex swap mechanisms, which often fell short, frustrating the community and hindering growth. The initial partnership with Chainge Finance marked a crucial turning point, laying the groundwork for a more streamlined and user-friendly DeFi experience. So, let’s jump in and go through the latest developments!
Liquidity Boost and New Chain Integrations
Recently, Sasha Mitchell’s proposals #167 and #168, which responded to the Chainge integration and aimed to raise Elastos’ liquidity on the Chainge platform, were successfully passed. This led to the addition of 197,152 USDC matched with 80,000 ELA, significantly boosting liquidity and reducing slippage. This is a key advancement for Elastos, which aims to offer a smoother and more predictable trading experience for its users.
Solana x Elastos Cross-Chain Swaps Launch
Solana has been added! This addition expands the options for direct swaps and trades with ELA on the Elastos Smart Chain (ESC), immediately tapping into the provided liquidity. Solana joins the long list of interconnected blockchain networks that can trade ELA with one click. These include Fusion, Ethereum, BNB Chain, Avalanche C, Polygon, Aurora, CoreDAO, Syscoin NEVM, Arbitrum, Optimism, Base, Linea, Polygon zkEVM, opBNB, Syscoin Rollux, Tron, Koinos, Merlin Mainnet, and X Layer Mainnet. This not only gives users more choices but also has the potential to offer better pricing and enhance the overall ecosystem experience.
Fiat On/Off Ramp Launch
Finally, Chainge Finance has enabled a fiat on/off ramp for ELA and the Elastos Smart Chain. Users can now purchase ELA directly using their bank accounts, bypassing the often clunky process of acquiring cryptocurrency via traditional exchanges. This feature represents a major advancement, allowing participants to move easily between traditional finance and the decentralized world of Elastos. The ability to trade ELA with fiat using bank accounts provides a simple entry point into the Elastos ecosystem, serving both crypto-native users and those entering the space from traditional financial backgrounds.
The added liquidity and the fiat on/off ramp position Elastos as a more accessible SmartWeb. Whether you’re a long-time supporter or new to Elastos, these changes mean better access, reduced costs, and a more dependable experience overall. Explore the new features on Chainge Finance today to experience these updates firsthand. Engage in community discussions, stay informed about future developments, and help shape the future of Elastos. Visit the Chainge web dapp or download the mobile app (Android / iOS) to start interacting with these new features now! Did you enjoy this article? To learn more, follow Infinity for the latest updates here!
This is to announce the 2024 OpenID Foundation Community Representatives election schedule. Those elected will help guide the Foundation’s efforts in facilitating the development and adoption of important open identity standards enabling global interoperability as well as the strategic direction of the Foundation.
Per the OIDF Bylaws and as of December 1, 2023, there are four Community Representative seats. George Fletcher’s and Mike Jones’s two-year terms have one year remaining. Nat Sakimura’s and John Bradley’s two-year terms are coming to an end, so the 2024 election has two Community Representative seats available. I want to thank Nat and John for their ongoing service and contributions to OIDF and the community at large, noting that both are eligible to run again in the 2025 election.
Election Schedule
The Community Representative election schedule is as follows:
Nominations open: Monday, December 9, 2024
Nominations close: Friday, December 27, 2024
Election begins: Monday, December 30, 2024
Election ends and results announced: Monday, January 13, 2025
New board term starts: Thursday, January 16, 2025 (annual OIDF board meeting)
All members of the OpenID Foundation in good standing are eligible to nominate themselves, second the nominations of others (including those who self-nominated), and vote for candidates. If you’re not already a member of the OpenID Foundation, we encourage you to join now at https://openid.net/foundation/benefits-members/.
Voting and nominations are conducted on the OpenID Foundation web site: https://openid.net/foundation/members/elections/61
You will need to log in at https://openid.net/foundation/members/ to participate in nominations and voting. If you experience problems participating in the election or joining the Foundation, please send an email to help@oidf.org.
Board Responsibilities
Board participation requires a substantial investment of time and energy. It is a volunteer effort that should not be undertaken lightly. Should you be elected, expect to be called upon to serve both on the board and on its committees. If you’re committed to the Foundation’s Vision and Mission and collaborate well with others, we encourage your candidacy.
You are encouraged to publicly address these questions in your candidate statement:
What are the key opportunities you see for the OpenID Foundation in 2025?
How will you demonstrate your commitment in terms of resources, focus, and leadership?
What would you like to see the Foundation accomplish in 2025, and how do you personally plan to contribute?
What other resources do you bring to the Foundation to help the Foundation attain its goals?
What current or past experiences, skills, or interests will inform your contributions and views?
Please forward questions, comments, and suggestions to me at director@oidf.org.
Best regards,
Gail Hodges
Executive Director
The OpenID Foundation
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post Announcing the 2025 OpenID Foundation Community Representatives Election first appeared on OpenID Foundation.
DIF Labs accelerates DeID market innovation
DIF Labs: Decentralized Identity Innovation Group
Table of Contents
1. DIF Labs accelerates Decentralized Identity market innovation
2. Why DIF Labs?
3. The Beta (β) Cohort: Iterative Process Development
4. Project: Bitcoin Ordinals Verifiable Credentials Framework
5. Project: Linked Claims
6. Project: VerAnon
7. Timeline
8. Meet the Mentors and Chairs
8.1. Expert Mentors
8.2. Chairs
9. What’s Next And Getting Involved?
1. DIF Labs accelerates Decentralized Identity market innovation
DIF’s commitment to supporting builders and doers in the decentralized identity space has led to the launch of DIF Labs, an initiative aimed at transforming how decentralized identity solutions are built, tested, and scaled. DIF Labs bridges the gap between brilliant ideas and market-ready solutions, empowering developers, startups, and industry innovators to collaborate, experiment, and grow.
2. Why DIF Labs?
DIF Labs was born from the need for a dedicated space where creators in decentralized identity can focus on practical innovation. Traditional standards organizations can be slow and formal, while incubators may lack the technical ecosystem DIF provides. DIF Labs is uniquely positioned to address these gaps, offering participants:
Speed and Flexibility: Projects in DIF Labs aim to achieve milestones within months, not years.
Open and Inclusive Governance: All outputs are open-source and royalty-free, allowing for broad community adoption.
Knowledge Pool: Access to a network of expertise across technical, legal, and market domains.
DIF Labs stands apart by supporting real-world applications rather than just standards development. It’s a space where builders can focus on creating solutions without the bureaucratic overhead typical of many organizations. Projects can use any decentralized identity stack, not necessarily one based on DIF specifications.
3. The Beta (β) Cohort: Iterative Process Development
The inaugural DIF Labs Beta Cohort began in November 2024 and runs through February 2025. This first iteration is a learning process for everyone involved, with high-touch engagement from the chairs to refine the structure and maximize impact. The chairs hand-selected projects with leads they knew were capable and focused on completing some amazing work.
4. Project: Bitcoin Ordinals Verifiable Credentials Framework
Lead: Brian Richter
Goal: Establish a standardized framework for implementing verifiable credentials on Bitcoin using Ordinal inscriptions.
Use Cases: Event tickets, digital collectibles, ownership records, Verifiable AI Inference credentials.
Proposal: Read the project proposal here.
5. Project: Linked Claims
Leads: Golda Velez, Agnes Koinange, Phil Long
Goal: Build progressive trust by combining attestations and release the high-level specification for LinkedClaims to further the LinkedTrust project.
Proposal: Read the proposal here.
6. Project: VerAnon
Lead: Alex Hache
Goal: Introduce a protocol for anonymous personhood verification using Semaphore, a zero-knowledge group membership protocol.
Proposal: Read the proposal here.
7. Timeline
Labs Start Date: Late November 2024
Check-ins: Scheduled throughout the three months.
Show and Tell: February 18, 2025 - Projects will be showcased to the community.
8. Meet the Mentors and Chairs
8.1. Expert Mentors
Ana Goessens: CEO at Animo Solutions (Animo.id); makes decentralized identity easy; all-round company/team building in the decentralized ID space.
Anthony Fu: Open sourcer; core contributor to Vite/NextJS etc.; how to build open-source projects.
Daniel Buchner: TBD/Block, Decentralized Web Nodes.
Jelle Millenaar: Co-Founder and CEO, Impierce Technologies; DID evangelist.
Markus Sabadello: Danube Tech, Decentralized Identity Foundation, Sovrin Foundation; core contributor to the Universal Resolver and Universal Registrar; go-to expert for the DID Resolution spec.
Matthew McKinney: Full-stack marketing leader | GTM | Partnerships | startup advisor | B2B/B2C | AI + Blockchain | Head of Growth @ ArcBlock.
Nik Graf: Local-first software expert and Founder at Serenity.
Otto Mora: ZK Identity Maxi @ Privado.ID (formerly Polygon ID).
Rizel Scarlett: Staff Developer Advocate at Block | Artificial Intelligence & Open Source.
Rod Boothby: Digital identity leader | VP Product | 2x Co-Founder, CEO, COO | $150M incremental revenue | Ex Wells Fargo, AIG, EY, Santander | Grew npm Inc to the #1 JavaScript repo with 20M developer users.
Steve McCown: Chief Architect, Applied Crypto R&D; CISM, CDPSE; patented inventor.
8.2. Chairs
DIF Labs is led by the following chairs, who focus on strategic direction and ensuring a collaborative environment.
Andor Kesselman: CTO, Andor Labs; Co-Chair of the Technical Steering Committee at DIF; Technical Workstream Lead at the Global Acceptance Network.
Ankur Banerjee: CTO/Co-Founder at Creds.xyz & cheqd; Co-Chair of the Technical Steering Committee at DIF.
Daniel Thompson-Yvetot: Co-founder of Tauri Apps, CEO of CrabNebula, author of “Manufacturing European Software”, DIF Labs Co-chair, European regulatory expert, public speaker, coach.
9. What’s Next And Getting Involved?
DIF Labs is more than a program—it’s a movement to propel decentralized identity into its next phase of growth. By fostering a space for practical, collaborative innovation, DIF Labs aims to cultivate solutions that tackle pressing, industry-driven challenges in the decentralized identity space.
We invite you to follow our journey and participate in this initiative:
Join the conversation on the DIF Labs Discord
Explore projects on GitHub
Attend the meetings: Meetings are held the third Tuesday of every month at 11am ET
Subscribe to DIF’s calendar
Stay tuned for updates and opportunities to get involved
You’ll be hearing more about our Mentors, Chairs, and Cohorts in the coming months. Stay tuned as we unveil even more Mentors as well.
After the Beta cohort is done, we will open up applications for new projects to run through DIF Labs. Together, we’re building the future of decentralized identity.
Author: Andor Kesselman
Created: 2024-12-10 Tue 17:26
San Ramon, CA, 06 December 2024 – The OpenID Foundation is proud to announce its return to the Gartner Identity and Access Management Summit, to be held in Grapevine, Texas, from December 9th to 11th, 2024.
Building on the significant success of its interoperability sessions at the previous Gartner Summit earlier this year in London, the OpenID Foundation will once again demonstrate the power and potential of its Shared Signals Framework (SSF) in securing digital identities and enabling real-time Identity Threat Detection and Response (ITDR).
The SSF is a game-changing open standard developed in the OpenID Foundation’s Shared Signals Working Group (SSWG). It defines an open API across which different security and risk signals can be sent, including events defined by the Continuous Access Evaluation Profile (CAEP) and Risk Incident Sharing and Coordination (RISC). These enable applications and service providers to communicate and make dynamic access and authorization decisions, providing the foundation for dynamic access control and improving response times to emerging threats.
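To give a concrete sense of what travels over this API, here is a hedged sketch of the kind of payload a transmitter might send as a Security Event Token carrying a CAEP "session revoked" event. Claim names are simplified and illustrative; consult the SSF, CAEP, and SET specifications for the normative structure before implementing.

# Illustrative only: a simplified view of a CAEP event payload before it is
# signed as a JWT and delivered over a Shared Signals stream.
import json, time

caep_session_revoked = {
    "iss": "https://idp.example.com",      # transmitter (hypothetical)
    "aud": "https://app.example.com",      # receiver (hypothetical)
    "iat": int(time.time()),
    "jti": "756E69717565206964656E746966696572",
    "sub_id": {"format": "email", "email": "user@example.com"},
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "event_timestamp": int(time.time()),
            "reason_admin": {"en": "Credential compromise detected"},
        }
    },
}

print(json.dumps(caep_session_revoked, indent=2))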
Atul Tulshibagwale, Co-Chair of the OpenID Foundation’s Shared Signals Working Group and CTO of SGNL, expressed enthusiasm for the upcoming event: “There is a real industry shift towards collaborative security enabled by open standards, and since it was demonstrated at the last Gartner event, the Shared Signals Framework has rapidly gained traction. Many leading companies have now announced their support for this set of standards, and are in the process of implementing them.
“That’s why we are delighted to have been invited back by Gartner to run these interoperability sessions with implementers who will demonstrate how these protocols are enabling robust Zero Trust architectures and enhanced security through seamless, real-time collaboration.”
Gail Hodges, the Executive Director of the OpenID Foundation, said, “Shared Signals is a critical new open standard to address the current threat environment, as recognized by a recent report from the CISA Cyber Safety Review Board. Gartner’s invitation to the OpenID Foundation to demonstrate interoperability of Shared Signals is a fantastic step on the journey to global scale and interoperability, by engaging industry leaders at this Gartner IAM Summit.”
Attendees of Gartner IAM Summit are invited to participate in two key sessions:
Building a Trust Fabric with the OpenID Shared Signals Framework
Monday, December 9th, 2:30 PM – 3:00 PM (CST). Led by the OpenID Foundation’s Atul Tulshibagwale, together with Gartner VP Analysts Felix Gaehtgens and Erik Wahlstrom, this session will explore the role of SSF in building a secure trust fabric, demonstrating how organizations can leverage open standards to streamline security event communication.
Shared Signals Interop Demos: CAEP and RISC in Action
Tuesday, December 10th, 12:00 PM – 12:30 PM, 2:00 PM – 2:30 PM, 3:45 PM – 4:15 PM (CST); Wednesday, December 11th, 12:00 PM – 12:30 PM, 1:00 PM – 1:30 PM, 2:45 PM – 3:15 PM (CST). Led by Atul and Elizabeth Garber of the OpenID Foundation, this interactive session will feature live demonstrations from leading implementers, showcasing how SSF, CAEP, and RISC work together to enable real-time ITDR.
The OpenID Foundation encourages all attendees to book time with interoperability participants, including industry leaders like AppOmni, caep.dev, Cisco, Delinea, Google, IBM, Jamf, Okta, Omnissa, SailPoint, Saviynt, SGNL, Thales, WinMagic. These one-on-one sessions will offer an in-depth look at how SSF is addressing real-world security challenges and laying the groundwork for a safer digital ecosystem.
To schedule a meeting with the OpenID Foundation and the interoperability participants, please contact us.
ENDS
For more information, please contact:
Serj Hallam E: serj.hallam@oidf.org
Elizabeth Garber E: elizabeth.garber@oidf.org
About The OpenID Foundation (OIDF)
The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.
The post OpenID Foundation returns to Gartner IAM Summit to showcase Shared Signals interoperability in action first appeared on OpenID Foundation.
In 2023, the USAID Global Health Supply Chain Program-Procurement and Supply Management (GHSC-PSM) project, funded by the U.S. President’s Malaria Initiative, collaborated with Nigeria’s National Malaria Elimination Programme to pilot the use of GS1 standards in Calabar Municipality.
Business goal: GS1 Healthcare Case Studies 2023-2024 (gs1-healthcare-cases_studies_2024_nigeria_final_with_video_2711.pdf)
In 1944, amid the devastation of World War II, representatives from 44 Allied nations met in Bretton Woods, New Hampshire, to design a new global financial system. They aimed to prevent the economic turmoil that had contributed to the Great Depression and the war. The result was a system where currencies were pegged to the U.S. dollar, which was convertible to gold—a framework intended to foster stability and growth.
By 1971, the Bretton Woods system collapsed. The U.S., facing mounting debts and dwindling gold reserves, ended the dollar’s convertibility to gold. This shift ushered in the era of fiat currencies—money backed by government decree rather than physical commodities. While offering monetary policy flexibility, fiat currencies led to inflation, currency devaluation, and financial crises. Today, the world grapples with over $300 trillion in debt and increasing economic instability.
In 2009, Bitcoin emerged as a decentralized electronic cash system introduced by the enigmatic Satoshi Nakamoto. Conceived as “digital gold,” Bitcoin is a scarce asset with a capped supply of 21 million coins. It operates independently of central banks, using blockchain technology to ensure transparency and security. Bitcoin’s design prevents arbitrary currency creation, aligning with the principles of a free and fair market.
Imagine earning a salary in Bitcoin. Each year, the amount might be smaller, but its purchasing power increases over time. This deflationary system complements technologies like artificial intelligence (AI) that grow more efficient over time. We cannot sustain a system where technology displaces jobs while money loses value. Bitcoin lays the foundation for a new financial paradigm—a stable, decentralized monetary system.
Despite its potential, Bitcoin faces challenges in scalability and programmability. Its security leads to slower transactions and limits complex financial operations like smart contracts. Solutions like Wrapped Bitcoin introduce centralization risks, undermining Bitcoin’s ethos. Enter BeL2 on Elastos—a Layer 2 solution that enhances Bitcoin’s functionality without compromising its principles. By transmitting information instead of transferring assets across chains, BeL2 allows native Bitcoin to talk with smart contracts and participate in decentralized finance (DeFi) applications like loans and stablecoins while maintaining security. It employs non-custodial BTC mainnet locking scripts for decentralized collateral, zero-knowledge proofs for verification, and decentralized arbiters, backed by Elastos' ELA (the Bitcoin-secured reserve asset), for trustless dispute resolution.
In 2024, Harvard students and alumni launched the New Bretton Woods (NBW) project to address the global debt crisis through decentralized solutions. Incubated at Harvard Innovation Labs, NBW aims to create a Bitcoin-backed stablecoin via BeL2, offering stability while preserving decentralization. This stablecoin lets users avoid Bitcoin’s price volatility while retaining long-term gains, making it practical for daily use. NBW reimagines Bitcoin not just as a store of value but as the foundation of a decentralized financial system. By integrating Bitcoin with smart contracts through BeL2, NBW supports new financial products, providing liquidity and stability to the global economy.
“Our goal is to create a ‘New Bretton Woods’ system anchored in Bitcoin,” said Jacob, Lead Member of NBW. “This initiative offers a decentralized and stable currency to help individuals and communities navigate the challenges of the global debt crisis.”
Building on these innovations, Elastos Founder and thought-leader Feng Han has proposed AI Kallipolis—a fully autonomous economic management AI inspired by Plato’s ideal city-state. AI Kallipolis operates without human intervention, capable of on-chain asset issuance and decentralized key management. It integrates artificial intelligence and blockchain to create a self-regulating economic order.
“AI Kallipolis functions as an impartial executor of market rules, free from external interests,” Feng Han explains. “It promises a more transparent marketplace where interactions between humans and AI are secured.”
This vision aligns with an AI Utopia where AI and human society coexist harmoniously through the Elastos SmartWeb, powered by ELA via Arbiters and universal gas. The aim is for AI to advance technology while safeguarding human freedom and privacy. By leveraging Bitcoin’s decentralized properties as the base layer, it seeks to build an automatically managed economic system without intermediaries. Integrating AI and blockchain offers solutions to complex issues like the debt crisis. By optimizing resource allocation and enhancing transparency, these technologies can alleviate economic pressures without central banks. AI Kallipolis and NBW aim to foster a more sustainable and equitable economic model by combining these technologies.
Looking ahead, the convergence of AI and blockchain could bridge human and AI civilizations. Elon Musk’s vision of an interstellar society gains new meaning when considering a future where AI agents and humans operate within a unified digital economy. In this scenario, AI Kallipolis, underpinned by Bitcoin, serves as the bridge between carbon-based and silicon-based life. As this economy grows, Bitcoin may function as Ray Dalio predicts—addressing global debt challenges and fostering unprecedented economic resilience. We stand on the brink of a paradigm shift. The convergence of Bitcoin, advanced protocols like BeL2, and AI represents a transformative change in how we conduct finance and manage resources. This future offers independence and abundance.
As Feng Han notes, “Viewing all systems—biological, social, or economic—as computational processes, the development of advanced AI is not an anomaly but an inevitable outcome.” The question is not whether we’ll embrace this new narrative but how quickly we can realize its potential. The groundwork is laid. The technology is advancing. The narrative is spreading. Next week, we will share some developments on initial Kallipolis AI Agent solutions being worked on for Elastos and our smart chain. Stay tuned!
Are you ready for the paradigm shift? Did you enjoy this article? To learn more, follow Infinity for the latest updates here!
Decarbonizing the maritime industry is a challenging task due to the long lifespan of ships (around 35 years), which limits the speed of fleet replacement. This is why the focus has shifted to the use of low-emission fuels, enabled by a Chain of Custody model called “Book and Claim.”
The system allows for the decoupling of the physical low-emission fuel from its associated low-emission attribute, which can be traded separately. This approach makes low-emission fuels more accessible.
This model serves as an interim mechanism, buying time until global infrastructure is fully in place to support widespread physical use of low-emission fuels without needing to decouple the sustainability attributes of maritime fuel from its actual usage.
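As a rough illustration of this decoupling, the sketch below models a registry in which the emissions attribute of a low-emission fuel delivery is "booked" as a certificate and later "claimed" by a cargo owner, with double counting prevented. All names are hypothetical and do not reflect Katalist's actual data model or API.

# Illustrative book-and-claim sketch; not the Katalist platform's implementation.
class BookAndClaimRegistry:
    def __init__(self):
        # cert_id -> {"tonnes_co2e_avoided": float, "claimed_by": str | None}
        self._certificates = {}

    def book(self, cert_id: str, tonnes_co2e_avoided: float) -> None:
        """Issue a certificate for a verified low-emission fuel delivery."""
        self._certificates[cert_id] = {
            "tonnes_co2e_avoided": tonnes_co2e_avoided,
            "claimed_by": None,
        }

    def claim(self, cert_id: str, cargo_owner: str) -> float:
        """Retire the certificate against a shipment; each certificate can be claimed once."""
        cert = self._certificates[cert_id]
        if cert["claimed_by"] is not None:
            raise ValueError("certificate already claimed (double counting prevented)")
        cert["claimed_by"] = cargo_owner
        return cert["tonnes_co2e_avoided"]

registry = BookAndClaimRegistry()
registry.book("GF-2025-001", tonnes_co2e_avoided=120.0)
print(registry.claim("GF-2025-001", cargo_owner="Acme Retail"))  # -> 120.0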
How?
To build trust among participants, the Mærsk Mc-Kinney Møller Center for Zero Carbon Shipping and RMI have co-developed and continuously refined a robust methodology that all actors can rely on.
The role of Energy Web Foundation was to provide the technology platform in a way that shortened implementation timelines, enabled hypothesis testing, and allowed for necessary adjustments in a relatively short time compared to building everything from scratch.
Thanks to our Generic Green Proofs approach, we successfully developed two proof-of-concepts, gathering valuable insights that enabled the team to refine the solution into a full production platform — ready to launch in November at COP29 (link to the press release).
What might have taken years to build, we accomplished in months, delivering critical insights that shaped the final platform.
What Companies Can Do on the Platform
Issuance
The platform is designed for key maritime industry actors:
Shipping Companies: Operators or owners of ships, responsible for purchasing fuel and uploading relevant data to the platform.
Freight Forwarders: Intermediaries in the maritime transportation cycle and vital parts of the supply chain.
Cargo Owners: Responsible for the contents of shipments.
What Will 2025 Bring?
By the end of the year, we anticipate onboarding several companies to the platform. By early 2025, the first certificates will be issued using Katalist. Additional features will be developed, with updates shared as plans progress.
Conclusion
Energy Web contributes to decarbonization efforts by serving as a climate-tech partner, significantly accelerating the journey from ideas to proof-of-concepts and minimum viable products.
Our expertise lies in creating generic frameworks that enable faster deployments, adaptable to decarbonization plans that drive impactful climate change solutions.
If you’re interested in bringing your ideas to life faster and more efficiently using proven methodologies and connections with relevant industry actors, let’s discuss how Green Proofs can be applied to your use case. Contact us!
Generic Green Proofs Use Case (Applied to the Maritime Industry): Katalist was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
The non-profit sector loves to talk about equity and inclusion, but when it comes to Requests for Proposals (RFPs), it’s often a different story. We’ve found too many RFPs demand hours or even days of unpaid work from agencies in a competitive process that drains time and energy. For smaller agencies, worker-owned cooperatives like WAO and those working with marginalised communities, we see this as not merely unfair — it’s a gatekeeping tactic that shuts out new voices and reinforces the status quo.
What’s wrong with the traditional RFP process?
Larger agencies and multinational corporations employ people specifically to win new work, meaning that RFPs have become a rigged game. Smaller agencies are expected to donate hours of unpaid labour just for the chance to land a project. It’s a raw deal. While we’re happy to spend time meeting the client and gathering some context, the time and resources we pour into project plans, timelines, budgets and the rest of the proposal requirements often end up as free idea banks for the organisation issuing the tender. For smaller organisations like ours, this means not being able to compete. In our opinion, the process stifles innovation and limits diversity by shutting out those who can’t afford to play.
Paid discovery phases: a better way forward
One of the few organisations we’ve come across who have thought differently is Migration Exchange. They pay organisations and freelancers for their work as part of the RFP process, which, if you think about it, is a short discovery phase.
This approach is a simple, effective fix for an outdated process. Here’s how it works: organisations pay agencies to research, plan, and strategise at the outset. No free labour, no exploitation. This not only levels the playing field but creates stronger partnerships and better outcomes for everyone. Even a symbolic dayrate would encourage many smaller organisations to apply.
Why It Works
Collaboration rather than exploitation — Paid discovery phases give agencies the breathing room to focus on delivering their best work, knowing they’re being fairly compensated. This creates space for honest, creative problem-solving rather than rushed, underfunded guesses.
Fair play — Smaller agencies, often representing underrepresented voices, get to compete without worrying about unpaid work eating into their already tight budgets. This means more diverse ideas and solutions, which benefit everyone involved.
Better, faster results — When agencies can afford to commit proper resources upfront, organisations get thoughtful, well-researched proposals instead of rushed pitches cobbled together at midnight. Paid discovery saves time and money down the road.
Walking the talk
Remember that this isn’t just about fairness for agencies, it’s also about equity for the communities they serve. Smaller agencies and cooperatives often have the deepest ties to marginalised groups, but they’re the ones most likely to be shut out by unpaid RFPs. By compensating discovery work, organisations ensure these voices aren’t drowned out by big-budget players with little connection to the people at the heart of the issue.
Examples
Made by Kind swears by the paid discovery approach as a way to transform the way tech projects are handled, aligning expectations and improving outcomes. It reduces risks and helps deliver solutions that are actually fit for purpose.
Open Contracting Partnership focuses on fairness and transparency in procurement to level the playing field for smaller organisations. Their work demonstrates how equitable practices can help ensure that even underfunded or smaller agencies have a fair shot at participating in the tendering process.
Matchstick Legal advocates for paid discovery as standard practice to prevent exploitation and promote fairness. Their approach ensures agencies are compensated for initial work, creating a more equitable and sustainable RFP process.
Change your approach
It’s time to call out the exploitation that has plagued RFPs for too long. If you’re issuing a tender, don’t ask for free labour — invest in the process. Paid discovery phases aren’t just a fairer way to work; they lead to smarter solutions and stronger partnerships. Let’s ditch the old ways and build something better.
Issuing a tender and need some help getting it right? Contact us! We can help you figure this out. Responding to a tender and want to figure out if it’s worth your time? Try Story Cube’s Pitch or Ditch Scorecard — at least until more tendering organisations get their act together around paid discovery processes!
Rethinking RFPs was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
This year has been packed with incredible supply chain stories showcasing innovation, collaboration, and inspiring moments in supply chain logistics.
In this episode, hosts Reid Jackson and Liz Sertl take you through their favorite conversations of the year, featuring insights from industry leaders like Gena Morgan, Dr. Darin Detwiler, and Chuck Lasley. They discuss key topics that defined the year—data quality, traceability, retail automation, and the UPC barcode—all while looking ahead to what’s next in 2025.
In this episode, you’ll learn:
Key trends that influenced supply chains in 2024
Innovations driving transparency and traceability
The future of 2D barcodes and data quality
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(01:48) 2D barcodes, data quality, and GS1 standards
(05:20) E-commerce and supply chain challenges
(06:37) Improving traceability and food safety
(11:04) The adoption of the barcode and its multiple uses
(13:11) Liz’s favorite episodes
(15:14) Reid’s favorite episodes
Connect with GS1 US:
Our website - www.gs1us.org
December 2024
DIF Website | DIF Mailing Lists | Meeting Recording Archive
Table of contents
1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Open Groups; 4. Announcements at DIF; 5. Community Events; 6. DIF Member Spotlights; 7. Get involved! Join DIF
🚀 Decentralized Identity Foundation News
Block Contributes Digital Identity Components to DIF
Block makes a significant contribution to the decentralized identity ecosystem by sharing key Web5 components with DIF, including their DID method (did:dht), Verifiable Credentials implementations, and Decentralized Web Nodes. Read more:
Block Contributes Digital Identity Components to the Decentralized Identity Foundation
This content is also posted on Block’s blog. In support of its decentralized identity work, Block is contributing foundational components developed under the Web5 umbrella to the Decentralized Identity Foundation (DIF). For the past several years, Block has been developing a number of open source components to push decentralized identity… (Decentralized Identity Foundation - Blog)
DIF 2024 Hackathon: Winners Announced!
The results are in! Our 2024 hackathon showcased incredible innovations across Verifiable AI, Proof of Personhood, Education & Workforce, and more. Check out the winning projects and see how participants tackled real-world challenges with decentralized identity. View the full results:
DIF 2024 Hackathon: Winners Announced!
The DIF 2024 Hackathon showcased incredible talent and innovation. Participants tackled real-world challenges across Verifiable AI, Proof of Personhood, Education & Workforce, Privacy-Promoting Credentials using Zero-Knowledge Proofs, Decentralized Storage, and Frictionless Travel. Congratulations to all the winners and a big thank you to our sponsors for making this event possible! (Decentralized Identity Foundation - Blog)
Algorand Foundation Joins DIF
The Algorand Foundation strengthens its commitment to open standards by joining DIF, bringing their did:algo method to our community. This collaboration advances both organizations' goals of creating standardized, secure digital identity solutions. Read all about it:
Algorand Foundation Joins Decentralized Identity Foundation to Advance Digital Identity Standards
This content was originally posted on Algorand’s blog. The Algorand Foundation has joined the Decentralized Identity Foundation (DIF), strengthening its commitment to open standards in digital identity. This move follows the development of did:algo, a decentralized identifier (DID) built on Algorand. Algorand’s DID method exemplifies the Foundation’s commitment… (Decentralized Identity Foundation - Blog)
The Rise of DIDComm and its impact on Key Industries preparing for eIDAS
Explore how DIDComm's formal verification is revolutionizing key industries preparing for eIDAS regulation. From finance and industrial machinery to travel and government services, discover how this secure communication protocol is enabling privacy-preserving, efficient interactions across borders. Learn more:
The Rise of DIDComm and its impact on Key Industries preparing for eIDAS
How key industries are preparing for eIDAS with DIDComm: The successful formal verification of DIDComm paves the way for tremendous DIDComm adoption. To help provide an understanding of this important technology, we’ve outlined some of the industries where DIDComm can play an important role. To learn more about DIDComm… (Decentralized Identity Foundation - Blog)
🛠️ Working Group Updates
📓 DID Methods Working Group
Identifiers and Discovery meets bi-weekly at 9am PT / noon ET / 6pm CET Wednesdays
💡 Identifiers and Discovery Working Group
Identifiers and Discovery meets bi-weekly at 11am PT / 2pm ET / 8pm CET Mondays
🪪 Claims & Credentials Working GroupThe Credential Schemas work item meets bi-weekly at 10am PT / 1pm ET / 7pm CET Tuesdays
🔐 Applied Crypto Working Group
The DIF Crypto - BBS work item meets weekly at 11am PT / 2pm ET / 8pm CET Mondays
📦 Secure Data Storage
DIF/CCG Secure Data Storage WG - DWN Task Force meets bi-weekly at 9am PT / 12pm ET / 6pm CET Wednesdays
If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.
📖 Open Groups at DIF
Veramo User Group
Meetings take place weekly on Thursdays, alternating between noon EST / 18:00 CET and 09:00 EST / 15:00 CET. Click here for more details.
🌏 APAC/ASEAN Discussion Group
The DIF APAC call takes place monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.
🌍 DIF Africa
Meetings take place monthly on the 3rd Wednesday at 1pm SAST. Click here for more details.
📢 Announcements at DIF
Join the DIDComm Interop-a-thon
Be part of advancing interoperability in decentralized communication! Join DIF's upcoming Interop-a-thon on December 12, testing DIDComm V2 features and collaborating with peers. Whether you're developing, deploying, or experimenting with DIDComm, this is your chance to shape the future of secure communication. Register and learn more:
Join the DIDComm Interop-a-thon
Pushing Forward Interoperability in Decentralized Communication: The Decentralized Identity Foundation (DIF) is thrilled to invite organizations, developers, and community members to participate in an exciting event centered around fostering interoperability within the DIDComm ecosystem. What is DIDComm? DIDComm (Decentralized Identifier Communication) is an open standard designed for secure, private peer-to-peer… (Decentralized Identity Foundation - Blog)
🗓️ DIF Members
Nuggets Introduces Private AI Identity Solutions: Nuggets unveils two groundbreaking solutions - Private Personal AI and Verified Identity for AI Agents - designed to bridge the human-AI interface gap while maintaining privacy and security. Read more.
Cheqd Partners with ID Crypt Global and ADEX for DeFi Trust: Three DIF members join forces to enhance trust and security in DeFi trading. The partnership brings trusted identity verification and reputation systems to ADEX's crypto trading platform. Read all about it.
Port of Bridgetown Implements Dock's Verifiable Credentials: The Port of Bridgetown has integrated Dock's Verifiable Credential technology into their Maritime Single Window, revolutionizing vessel clearance processes with secure, tamper-proof digital credentials. Read the full article.
Anonyome Labs Launches Updated Website: Anonyome Labs has refreshed their online presence with a newly updated website at https://anonyome.com/. Check out their latest solutions for privacy-preserving digital identity.
👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.
New Member Orientations
If you are new to DIF, join us for our upcoming new member orientations. Please subscribe to DIF’s Eventbrite for upcoming notifications on orientations and events.
🆔 Join DIF!
If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:
Follow us on Twitter/X
Join us on GitHub
Subscribe on YouTube
🔍 Read the DIF blog
New Member Orientations
If you are new to DIF, join us for our upcoming new member orientations. Find more information on DIF’s Slack or contact us at community@identity.foundation if you need more information.
On September 19, 2024, a fact-based discussion took place on the progress of Switzerland’s state electronic identity (E-ID). The network policy evening was organized by the digital society and highlighted the key challenges and opportunities of a project that is set to shape the country’s digital future. Speakers included Annett Laube, Professor of Computer Science at Bern University of Applied Sciences, Rolf Rauschenbach, Information Officer for E-ID at the Federal Office of Justice, and Daniel Säuberli, President of the Digital Identity and Data Sovereignty Association (DIDAS).
The Vision for a State E-ID
Source: Der Nutzen der E-ID (the benefits of the E-ID)
The state-operated E-ID is more than a technological tool; it is envisioned as a foundation for trust and efficiency in the digital space. Following the rejection of the privately run E-ID proposal in 2021, a state-led solution focusing on privacy, data sovereignty, and user-friendliness has taken center stage. The goal is to create a digital identity that meets citizens’ needs while establishing the technological and legal groundwork for innovative digital services.
Contributions from the Experts
Prof. Annett Laube provided insights into the technological and scientific standards required for the development of the E-ID. She emphasized the importance of ensuring privacy and security through principles like privacy by design and highlighted the critical role of transparency enabled by open-source solutions. User binding is the secure process of linking a digital identity to its rightful owner, ensuring that only authorized individuals can access and use the E-ID while maintaining privacy and trust.
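To illustrate the user-binding idea in code, here is a minimal sketch in which a wallet proves possession of the key a credential was issued to by signing a verifier-supplied nonce. The names and flow are illustrative only and are not drawn from the Swiss E-ID or SWIYU implementation; the snippet assumes the third-party cryptography package is installed.

# Illustrative holder-binding check; not the Swiss E-ID implementation.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Issuance time: the credential records the holder's public key.
holder_key = ed25519.Ed25519PrivateKey.generate()
credential = {"given_name": "Alice", "holder_pub": holder_key.public_key()}

# Presentation time: the verifier sends a fresh nonce and the wallet signs it.
nonce = os.urandom(32)
proof = holder_key.sign(nonce)

# The verifier checks the proof against the key bound into the credential.
try:
    credential["holder_pub"].verify(proof, nonce)
    print("Holder binding verified: presenter controls the credential's key")
except InvalidSignature:
    print("Presentation rejected: not the legitimate holder")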
Rolf Rauschenbach outlined the strategic and political considerations underpinning the project. He discussed the challenges of implementing a solution that is both technically robust and user-friendly and stressed the need for a clear legal framework.
Daniel Säuberli, representing DIDAS, drew attention to the essential role of trust in digital infrastructures. He emphasized three fundamental principles necessary for the success of the E-ID:
Strengthening Data Sovereignty: Citizens must have full control over their personal data, without unnecessary interference by third parties.
Promoting Interoperability: The E-ID must function on both a national and international level to enable seamless cross-border digital interactions.
Building Trust Through Transparency: Open communication and active collaboration with the public, private sector, and academia are critical to gaining widespread acceptance for the E-ID.
Ambition Levels for the E-ID
Source: Zielbild E-ID (target vision for the E-ID)
The development of the state E-ID is structured around three ambition levels, each reflecting its growing capabilities and societal value:
Basic Functionality: At this level, the E-ID serves as a secure and user-friendly tool for digital identification. It is used primarily for core applications, such as accessing federal online services like tax filings or registry extracts, and ensures reliable authentication for administrative processes.
Integration into Cantonal and Municipal Services: The second level expands the E-ID's use to services provided by cantons and municipalities. The goal is to offer citizens seamless digital access to public services across all administrative levels. Examples include registering address changes, participating in electronic voting, or applying for local permits. This integration transforms the E-ID into a vital part of Switzerland's public administration.
Adoption by the Private Sector: The highest ambition level envisions the E-ID as a universal identification tool for private-sector services. This includes applications in areas such as online banking, e-commerce, and healthcare. By creating a digital ecosystem where the E-ID is widely accepted, it becomes a driver of innovation and digitization across various industries.
Connecting Principles to Ambition Levels
The ambition levels of the state E-ID are intrinsically linked to the principles outlined by Daniel Säuberli:
Data Sovereignty forms the foundation of the basic functionality, ensuring citizens retain control over their data.
Interoperability becomes essential as the E-ID is integrated into cantonal and municipal services, enabling seamless operation across different platforms.
Transparency is vital for the third level, fostering trust and accountability as the E-ID expands into the private sector.
Challenges and Outlook
Despite significant progress, critical questions remain: How can the E-ID be effectively integrated into existing systems? How can transparency and privacy be guaranteed? And how can public trust be secured? Addressing these challenges will require collaboration, clear communication, and an unwavering commitment to the outlined principles.
DIDAS: A Driving Force for Digital Trust
For DIDAS, the state E-ID represents a cornerstone for promoting digital trust and innovation. As a platform for digital trust infrastructures, DIDAS is committed to advancing the development of the E-ID and creating the conditions for a sovereign digital society.
With its state-operated E-ID, Switzerland has the opportunity to become a leader in digital identities—rooted in the values of trust, security, and innovation.
Conclusion
The state E-ID offers Switzerland the chance to establish a modern and trustworthy digital ecosystem centered on the needs of its citizens. By strengthening data sovereignty, ensuring interoperability, and fostering transparency, the E-ID is a crucial step forward in the country's digital transformation.
However, its success depends on transparency, collaboration, and clear communication to build trust and public acceptance. DIDAS plays a pivotal role by bridging stakeholders and promoting the principles of digital trust and data sovereignty. Through a clear vision and open dialogue, Switzerland can set a global example for a digital future built on trust, innovation, and user empowerment.
GS1 GDSN accepted the recommendation by the Operations and Technology Advisory Group (OTAG) to implement the 3.1.31 standard into the network in May 2025.
Key Milestones:
As content for this release is developed, it will be posted to this webpage, followed by an announcement to the community to ensure visibility.
Data Pools should contact the GS1 GDSN Data Pool Helpdesk to understand the plan for the update. Trading Partners should work with their Data Pools on understanding the release and any impacts to business processes.
Business Message Standards including Message Schemas Updated For Maintenance Release 3.1.31
Trade Item Modules Library 3.1.31
GS1 GDSN Code List Document (Dec 2024)
Delta ECL for release 3.1.31 (Dec 2024)
Validation Rules (Jan 2025)
Delta for Validation Rules (Jan 2025)
Approved Fast Track Attributes
Unchanged for 3.1.31
BMS Documents Carried Over From Previous Release
BMS Catalogue Item Synchronisation
BMS Basic Party Synchronisation
Schemas
Catalogue Item Synchronisation Schema including modules 3.1.31
Trade Item Authorisation Schema
Release Guidance
GS1 GDSN Attributes with BMS ID and xPath
Packaging Label Guide (Jan 2025)
GPC to Context Mapping 3.1.31 (Nov 2024) May GPC publication
Delta GPC to Context Mapping 3.1.31 (Nov 2024) May GPC publication
GS1 GDSN Unit of Measure per Category
Unchanged for 3.1.31
Warning Messages Presentation (Mar 2024)
Flex Extension for Price commentary (Dec 2018)
Any questions?
We can help you get started using the GS1 standards.
GS1 GDM SMG voted to implement the 2.12 standard into production in November 2024.
Key Milestones:
As content for this release is developed, it will be posted to this webpage, followed by an announcement to the community to ensure visibility.
GDSN Data Pools should contact the GS1 GDSN Data Pool Helpdesk to understand the plan for the update. Trading Partners should work with their Data Pools (if using GDSN) and/or Member Organisations on understanding the release and any impacts to business processes.
GDM 2.12 contains updated reference material aligned with ADB 2.6 and GDSN 3.1.29.
Updated For Maintenance Release 2.12
GDM Standard 2.12 (Nov 2024)
GDM Local Layers
China - GSMP RATIFIED (April 2022)
France - GSMP RATIFIED (November 2023)
Germany - GSMP RATIFIED (November 2023)
Poland - GSMP RATIFIED (November 2023)
Romania - GSMP RATIFIED (December 2021)
USA - GSMP RATIFIED (February 2023)
Finland - GSMP RATIFIED (November 2023)
Netherlands - GSMP RATIFIED (November 2024)
Italy - GSMP RATIFIED (May 2024)
Release Guidance
GDM Market Stages Guideline (June 2023)
GDM Attribute Implementation Guideline (November 2024)
GPC Bricks to GDM (Sub-) Category Mapping for GDM 2.12 (August 2024)
Attribute Definitions for Business (November 2024)
GDM (Sub-) Categories (October 2021)
GDM Regions and Countries (17 December 2021)
GDSN Release 3.1.29 (November 2024)
Tools
GS1 GDM Attribute Analysis Tool (May 2024)
GDM Local Layer Submission Template (May 2024)
Training
Future Release Documentation
GPC Bricks to GDM (Sub-) Category Mapping for GDM 2.12 (August 2024)
About this standard
Any questions?
We can help you get started using the GS1 standards.
Completed the build of the first Citopia Node and Citopia Self-Sovereign Digital Twin™ (SSDT™), demonstrating how any third-party services, such as a Zero-Knowledge Proof (ZKP) of location service, can be deployed and utilized on a Citopia Node.
January 2024: Kickoff of Global Battery Passport (GBP) Minimum Viable Product (MVP)
MOBI and its members kicked off a three-year initiative to develop Citopia Global Battery Passport System (GBPS). This initiative involves leveraging Citopia and Integrated Trust Network (ITN) services to demonstrate secure battery data exchange and identity validation between participants and test core functionalities like selective disclosure (where data is shared only with intended recipients). The goal is to facilitate seamless coordination and communication throughout the battery value chain for enhanced circularity, accountability, efficiency, and regulatory compliance. Read the Citopia GBPS 1-Pager
February 2024: ITN Release 0.3.0
Complete refactoring and re-architecture of the ITN SSDT to a highly modular design based on the Aries Agent Framework. Upgraded to the DIDComm v2 communication protocol.
February 2024: Held MoCoTokyo in partnership with AWS and DENSO
On 19 February 2024, MOBI hosted MoCoTokyo in collaboration with Amazon Web Services (AWS) and DENSO, offering a one-of-a-kind summit for industry leaders to network, share solutions, and collectively explore critical challenges and opportunities at the forefront of the circular economy transition. Presentations from key industry players delved into diverse use cases such as decentralized energy systems and electric vehicles, all underscored by the imperative of integrating Web3 technologies for Global Battery Passport (GBP) implementation. Read the event report
March 2024: Joined the eSTART Coalition
The Electronic Secure Title and Registration Transformation (eSTART) Coalition is a group of leading auto industry organizations united in advocating for modern solutions to replace the paper-based processes that currently dominate state and local DMV operations. Modernizing these processes will result in significant cost and time savings for consumers, state and local DMV operations, and industry participants. Read the press release
March 2024: Launched the MOBI Members Portal
The MOBI Members Portal is a hub for our members to access essential meeting minutes, Working Group documents, in-progress deliverables, demos, and more. Read the User Guide
April 2024: ITN Releases 0.3.1
Major upgrades to the network and the ITN SSDT, including new functionality, code refactoring, extended documentation, and expanded test coverage. In particular, this release added Arbitrum One, a public Ethereum Layer 2 scaling solution (also known as an Optimistic Rollup), as a new verifiable data store for the Decentralized Identifiers (DIDs) anchored on the ITN, alongside Hyperledger Fabric, a private DLT network.
April 2024: Announced Interoperability Pilot with Gaia-X 4 moveID
MOBI and Gaia-X 4 moveID announced a joint initiative to advance cross-industry interoperability. The initiative focused on the joint implementation of two pioneering MOBI standards — MOBI Vehicle Identity (VID) and MOBI Battery Birth Certificate (BBC). More specifically, the initiative centered around linking physical objects — e.g., vehicles and their parts such as batteries — to Web3 digital identities and credentials. Read the press release
May 2024: Completion of Phase I-Stage 1 of the GBP MVP
In Stage 1, implementers demonstrated the ITN identity services of one-to-one cross-validation for battery identity and data. The ITN serves as a federated (member-built and operated) registry for World Wide Web Consortium (W3C) Decentralized Identifiers (DIDs), offering Self-Sovereign Identity (SSI) management for connected entities such as batteries and their value chain participants. Read the press release
June 2024: Kickoff Workshop for Phase I-Stage 2 of the GBP MVP
This pivotal meeting, hosted in partnership with DENSO, brought together member companies united in their commitment to driving circularity, compliance, and resilience across the battery sector through the development of the GBP MVP. Members reviewed the work done in Stage 1 and prepared for Stage 2.
July 2024: ITN Releases 0.3.2
Minor release that primarily updated the ITN storage network, alongside overall node improvements.
September 2024: ITN Releases 0.3.3
Minor release that primarily added expanded DID management capabilities and continued node improvements.
October 2024: Released the Battery Birth Certificate (BBC) Technical Specifications V1.0
The BBC schema is a cornerstone of MOBI's Global Battery Passport (GBP) system, which is being developed and tested with public and private partners worldwide. This first version outlines the necessary static data attributes for compliance with regulations such as the EU Battery Regulation and the CARB ACC-II Regulation. Access the complete standard
October 2024: Published the MOBI Web3 White Paper V4.0
Read the updated White Paper here!
November 2024: Published the draft charter for the Artificial Intelligence (AI) Working Group
The rapid evolution of AI presents critical opportunities across the digital landscape. We’ve already begun to see radical changes to business efficiency, data privacy, digital trust, and regulatory compliance—movements that portend seismic market shifts to come. This year, we held several workshops to discuss the potential for an AI Working Group within MOBI. Next year, we’re formally launching the Working Group to discuss AI-related challenges and opportunities, co-develop standards, and monitor pertinent regulations globally. The AI Working Group will be held during MTS meeting hours. Read the draft charter
November 2024: Completed the first year of a three-year initiative for the Citopia Global Battery Passport System (GBPS)
In the first year of the GBPS initiative, MOBI and its members concentrated on understanding global regulatory requirements and developing use cases aligned with Web3 technology for secure battery data management. Core technical achievements include the implementation of verifiable credentials, selective data disclosure, track and trace of asset ownership, and secure data exchange. Read the Citopia GBPS 1-pager
December 2024: Released Battery State of Health (SOH) Labeling and Certification White Paper
Battery SOH is an important variable that not only defines the performance of the batteries but also stands as one of the key factors in economic decisions related to the resale, recycling, repurposing, and reuse of such batteries and battery-powered devices. Developed with members of the Electric Vehicle Grid Integration (EVGI) II Working Group, this white paper offers guidance on the current state of practice and proposes a framework for SOH labeling and certification in line with critical regulations. Read the complete White Paper
December 2024: ITN Release 0.3.4
Minor release that primarily added support for the draft OpenID Connect specifications for W3C Verifiable Credential issuance and W3C Verifiable Presentation exchange.
December 2024: Completing Phase I: Stage 2 of the GBP MVP
During Phase I, MOBI and its members concentrated on understanding global regulatory requirements and developing use cases aligned with Web3 technology for secure battery data management. Core technical achievements include the implementation of verifiable credentials, selective data disclosure, track and trace of asset ownership, and secure data exchange. By running nodes and testing selective disclosure, the team validated a decentralized framework where sensitive information can be securely shared among authorized parties without risking intellectual property. Building on the success of Phase I (2024), MOBI and its partners are now advancing to Phase II (2025), which will deepen the system's capabilities for battery data exchange.
The post 2024 Report: MOBI Milestones first appeared on MOBI | The New Economy of Movement.
In today’s world, corporations are under immense pressure to reduce their carbon footprints, meet regulatory standards, and fulfill consumer demand for greener products. In doing so, they must balance the challenge of providing good and verifiable data that stands up to external scrutiny with protecting their proprietary information and processes.
Corporations are pursuing a variety of sustainability solutions to achieve these goals, which vary in terms of flexibility, cost and compliance with the standards of large-scale enterprises and regulators.
To address these complex sustainability needs, Energy Web developed Green Proofs — a powerfully comprehensive and configurable software solution — to bring deep levels of transparency and verifiability to emerging green products and markets.
Green Proofs technology is built to enable the following:
1. Buy and sell low-carbon services and commodities: Whether your company deals in biofuels, green energy, or climate-conscious services, Green Proofs helps you sell and source products that can be verifiably marketed as sustainable.
2. Launch green product registries: Create transparent, scalable, and credible market access in “hard to abate” sectors that can benefit from attribute tracking and support broader decarbonization.
3. Prove your company and products are sustainable: Green Proofs provides tools for tracking and reporting progress toward environmental goals. It integrates granular data, helping you measure Scope 3 emissions and evaluate supplier impact while protecting sensitive information.
Green Proofs technology
Green Proofs is a suite of modular technology solutions that come in the form of turn-key applications or highly customized software built in close partnership with Energy Web.
A core technology underpinning Green Proofs software is the Energy Web Worker node network. Energy Web developed worker nodes to solve a longstanding problem that hindered the advancement of energy tracking solutions: solution logic varied widely according to use case and relied on commercially sensitive data that oftentimes needed to remain private, but the results needed to be transparent and publicly verifiable.
Worker nodes address this problem by allowing enterprises to configure their own computing networks that:
Ingest data from external sources
Execute custom logic workflows
Vote on results in order to establish consensus without revealing or modifying the underlying data
Publish the consensus to a trusted, public ledger
Worker nodes put enterprises in the driver’s seat of their application, giving them granular control over workflow logic and data inputs. The end result is an enterprise-friendly architecture that provides cryptographic proof that pre-defined rules and processes are being followed correctly, while preserving data privacy and integrity.
It is important to note that while Energy Web has historically been known for its use of blockchain, worker nodes do not exist to store or tokenize certificates and data on the blockchain. The primary use of blockchain in Green Proofs is to serve as a ledger for the worker nodes’ validation results of logic workflows.
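To make the ingest, execute, vote, and publish flow described above more concrete, here is a minimal, hypothetical TypeScript sketch of a worker-node round. It is not Energy Web's actual API: the type names, the two-thirds vote threshold, and the publishing step are illustrative assumptions under which each node reveals only its computed result, never the raw data.

```typescript
// Hypothetical sketch of a Green Proofs-style worker node round.
// Names and thresholds are illustrative, not Energy Web's actual API.

type Reading = { assetId: string; kwh: number; renewable: boolean };

// Each worker runs the same deterministic logic over privately ingested data
// and shares only its result, not the underlying readings.
function computeResult(readings: Reading[]): string {
  const greenKwh = readings
    .filter((r) => r.renewable)
    .reduce((sum, r) => sum + r.kwh, 0);
  // A real node would publish a cryptographic hash; a JSON string stands in here.
  return JSON.stringify({ greenKwh: Math.round(greenKwh) });
}

// Consensus: accept a result only if a supermajority of workers voted for it.
function reachConsensus(votes: string[], threshold = 2 / 3): string | null {
  const counts = new Map<string, number>();
  for (const v of votes) counts.set(v, (counts.get(v) ?? 0) + 1);
  for (const [result, count] of counts) {
    if (count / votes.length >= threshold) return result;
  }
  return null; // no consensus; nothing is published
}

// Example round with three workers, each ingesting its own (private) data feed.
const workerFeeds: Reading[][] = [
  [{ assetId: "solar-1", kwh: 120.4, renewable: true }],
  [{ assetId: "solar-1", kwh: 120.4, renewable: true }],
  [{ assetId: "solar-1", kwh: 119.9, renewable: true }], // rounds to the same result
];
const votes = workerFeeds.map(computeResult);
const consensus = reachConsensus(votes);
if (consensus) {
  console.log("Publish to public ledger:", consensus); // only the agreed result leaves the nodes
}
```

The key design point the sketch tries to capture is that the ledger only ever sees the agreed output of the workflow, which is what keeps commercially sensitive inputs private while the result stays publicly verifiable.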
Green Proofs Business Applications
Green Proofs helps companies demonstrate that their operations, products, and services are sustainable. It achieves this in three ways:
Helping Companies Buy and Sell Low-Carbon Services and Commodities
Green Proofs streamlines access to low-carbon services and commodities, empowering companies to source materials and services that both support their sustainability goals and enable the marketing/sale of verifiably green products. Currently, Green Proofs is streamlining market access to sustainable aviation fuel, green EV charging, low-carbon shipping services, and climate-aligned Bitcoin mining, making it easier for companies to find the right solutions to reduce their Scope 1, 2, and 3 emissions.
The recently-announced Katalist platform is an excellent example of this business application — using Katalist, maritime freight customers can lower their emissions using a robust book-and-claim system, supporting corporate sustainability claims about themselves and their products.
Launch Green Product Registries
For companies or consortia looking to establish or expand markets for emerging green commodities, Green Proofs offers the ability to launch next-generation Green Product Registries. These registries support transparent, scalable tracking of sustainable goods and services, enabling participants to securely book, trade, and retire digital certificates that represent specific environmental attributes.
By deploying these customized registries, companies can support sustainable product verification, drive growth in emerging green markets, and contribute to decarbonization efforts on a larger scale. With the flexibility and transparency offered by Green Proofs, businesses can lead the way in expanding sustainable options for their industry while building trust with customers and stakeholders.
To learn more about this business application, we invite you to explore the SAFc Registry, where users can obtain certificates representing the use of sustainable aviation fuel, which are then used to credibly claim Scope 3 emissions reductions.
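As a rough illustration of the book-and-claim lifecycle such registries manage, the hypothetical TypeScript sketch below models a certificate that is issued against a batch of sustainable fuel, transferred between account holders, and finally retired so its environmental attribute can be claimed exactly once. The data model is an assumption made for illustration, not the SAFc Registry's actual schema.

```typescript
// Hypothetical book-and-claim certificate lifecycle; not the actual SAFc Registry schema.

type CertificateStatus = "issued" | "transferred" | "retired";

interface Certificate {
  id: string;
  commodity: "SAF" | "green-electricity" | "low-carbon-shipping";
  volumeTonnes: number;
  holder: string;            // current account holder
  status: CertificateStatus;
  retiredBy?: string;        // set once, when the environmental claim is made
}

function transfer(cert: Certificate, to: string): Certificate {
  if (cert.status === "retired") throw new Error("Retired certificates cannot move");
  return { ...cert, holder: to, status: "transferred" };
}

function retire(cert: Certificate, claimant: string): Certificate {
  if (cert.status === "retired") throw new Error("Already retired: double claiming prevented");
  return { ...cert, status: "retired", retiredBy: claimant };
}

// Example: a producer issues a certificate, a corporate buyer acquires and retires it.
let cert: Certificate = {
  id: "cert-0001",
  commodity: "SAF",
  volumeTonnes: 50,
  holder: "fuel-producer",
  status: "issued",
};
cert = transfer(cert, "corporate-buyer");
cert = retire(cert, "corporate-buyer"); // the emissions claim can now be made once, and only once
```

The retire step is what makes the book-and-claim model credible: once a certificate is retired, its attribute cannot be sold or claimed again.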
Track and Report Emissions
To truly demonstrate sustainability, companies need more than metrics; they need verifiable data that tracks the environmental impact of their products, operations, and supply chains (Scope 1, 2, and 3 emissions). Green Proofs can help companies better collect, track, and report this data in detail, offering insights on both corporate-level and product-specific emissions. We are currently working with industry partners to gather requirements for this business application.
Who can use Green Proofs?
Energy Web built Green Proofs for any corporation or organization that wants to market itself or its products as green. We help customers access Green Proofs in a variety of ways — from no-code and low-code solutions that enable quick, independent launches to providing ongoing design, development, and hosting services for bespoke platforms. To discuss possibilities for applying Green Proofs to your use case, please contact us!
Green Proofs: a 360° View was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post The Velocity Network Trust Framework appeared first on Velocity.
Victus Capital, a leading Web3.0 venture capital firm that fuels company growth through investment partnerships, providing not only capital but also strategic and marketing support through a vast network, has officially partnered with Cyber Republic DAO LLC following a successful proposal to strengthen the Elastos ecosystem. This collaboration is unique—it marks one of the first significant deals between a venture firm and a legally registered DAO, the Cyber Republic, which uses Elastos’ Bitcoin merge-mined coin (ELA) for network governance. Elastos employs a unique governance model in which ELA holders vote for delegates who represent the community for a year. Community suggestions become proposals with a seven-day countdown that require consensus—a process epitomizing decentralized decision-making and aligning with fundamental democratic principles.
Through phased capital contributions and a managed market-making approach, Victus Capital will help stabilize the ELA token while facilitating the acquisition of USDT stablecoins. This strategy mitigates downward sell pressure on ELA, supporting its value and boosting investor confidence. The USDT reserves will fund vital initiatives, including ecosystem development, developer incentives, and comprehensive marketing efforts designed to promote Elastos’ market presence. Leveraging Victus Capital’s growth team and expansive network addresses the essential needs of resource acquisition and strategic expansion. This partnership positions Elastos to make significant strides toward its mission of creating a secure, decentralized, and user-owned internet.
A Unique Alliance to Accelerate Elastos’ MissionThe internet serves as a global platform for information exchange, but centralized control over data undermines user privacy and autonomy—essential rights in the digital age. Elastos envisions a decentralized internet where users regain control over their data, secured by the reliability of Bitcoin miners. Achieving this vision requires overcoming foundational challenges like limited resources, market volatility, and the critical need for widespread adoption. Partnering with entities committed to decentralization and innovation is crucial. The alliance with Victus Capital is not just strategic but essential for transforming the internet’s core structure and making people aware of the technology they can use.
Building a Decentralized Future Through Strategic InvestmentTraditional funding methods, like liquidating native tokens such as ELA for operational funding, exert downward pressure on the token’s value and hinder growth efforts. To preserve ELA’s intrinsic value and ensure market stability, Cyber Republic DAO and Victus Capital have devised a strategy to support the acquisition of stablecoin reserves (USDT).
This approach addresses the need for sustainable funding without devaluing existing assets. It provides liquidity to invest in essential areas: ecosystem development, developer incentives, and targeted marketing—all fundamental for expanding Elastos’ reach. Victus Capital’s extensive network of key opinion leaders, exchange partnerships, and development expertise fulfills the fundamental requirement of increasing visibility and adoption in a competitive landscape. By focusing on these core elements, the partnership establishes a robust framework for organic and sustainable growth, ensuring every step aligns with basic economic principles and technological innovation. Did you enjoy this article? To learn more about this partnership, read the CRC proposal here and visit the Victus Capital website and Twitter. Follow Infinity for the latest updates here!
Source: Biometric Update Website
Digital Identity New Zealand (DINZ) hosted a discussion led by NEC New Zealand on “Facial Recognition and CCTV Integration in Retail Security” this week, with a panel including data privacy, government and legal experts.
It comes amidst industry anticipation of the New Zealand Privacy Commissioner’s conclusions on the results of supermarket chain Foodstuffs’ trial of facial recognition. The company considers the preliminary findings of the trial encouraging.
In Australia, retail chain Bunnings made headlines after it was found to have breached the country’s privacy laws by using facial recognition. However, the company received some unexpected support when 78 percent of nearly 11,000 respondents supported the company’s use of the technology.
In the discussion, Campbell Featherston, a partner at law firm Dentons New Zealand, mentioned that it is easier to deploy facial recognition technology (FRT) in New Zealand than in Australia due to differences in law.
Commenting on the Bunnings case, Featherston remarked that the Australian retailer must obtain consent when collecting biometric data such as from FRT. “The absence of consent under Australian law means that it is very difficult for Bunnings to roll out facial recognition technology,” he said.
“The need for consent doesn’t apply [in New Zealand],” said Featherston, who added that it wouldn’t surprise him if the privacy law had to change in Australia, for instance by removing the need for consent in order to accommodate the use of FRT.
Ross Hughson, managing director of Personal Information Management, was asked what was driving the use of FRT amongst retailers. “The key driver is health and safety of staff,” he replied, pointing to managers’ responsibility over their employees’ health and safety.
Featherston elaborated on how best to comply with New Zealand’s Privacy Act 2020, mentioning privacy impact assessments (PIA), having a good understanding of privacy safeguards, having human oversight of the technology, and properly trained staff. He suggested that users of the technology should be sure of the purpose of what they’re trying to achieve through its use, and to avoid “purpose creep.” Transparency is also important and retailers should have notices at the entrance of stores, and “customer-facing documentation,” he said.
The senior lawyer made the point that issues can arise even in the absence of technology. He brought up the example of security guards trying to identify people based on grainy CCTV footage, which could lead to misidentification.
Dr. Vica Papp, principal data scientist at MBIE, was invited to talk about racial biases. This is a chief concern of New Zealand Privacy Commissioner Michael Webster as the accuracy of FRT for minority populations with darker skin is an issue.
Papp acknowledged that biases exist and stressed the importance of training staff on unconscious bias. She said that with FRT the issue can be physics-based rather than one of “race”: people with darker skin tend to reflect less light, which affects “light receptivity” and can impact FRT performance. She advised retailers to choose a product that does not discriminate, that can handle the specific local population, and to train and test the system on “well-curated data sets.”
The post New Zealand lawyer ‘not surprised’ if Australian laws change for retail biometrics use appeared first on Digital Identity New Zealand.
We're looking to talk to organizations — especially those working on climate — that have been targeted by or impacted by disinformation campaigns.
The post Contribute to our latest project: Social justice organizations based in Africa and Latin America impacted by disinformation campaigns appeared first on The Engine Room.
Together with Friends of the Earth (England, Wales and Northern Ireland), or EWNI, we recently brought together climate and environmental justice campaigners, digital rights advocates, and other engaged activists. We discussed a set of shared principles for how we, as a collective, can apply AI technology and think about AI in alignment with our values. The event, a Greening AI Roundtable, was fantastic.
Our intro slide
We were pleased to have four amazing folks come and lead the discussion:
Marcus Berdaut, Creative Producer at the Upsetters
Marie-Therese Png, AI Ethics PhD Candidate at Oxford
Samantha Ndiwalana, Ranking Digital Rights Research Lead at the World Benchmarking Alliance
shawna finnegan, Environmental Justice Lead at the Association for Progressive Communications
And the audience was just as amazing, with people from a variety of campaigning and justice organisations, universities and non-profits. Everyone was eager to understand where the conversation of AI + Sustainability currently is and where it might be going.
A first draft overview slide of the project
In the lead-up to our Greening AI Roundtable, WAO carried out desk research and engaged in user research interviews to build a set of draft principles. The draft was informed by reports, articles, and papers from organisations who have carried out deep work in this area, including Friends of the Earth, the Joseph Rowntree Foundation, the Association for Progressive Communications and many more.
Our shared principles aim to guide activists and campaigners in making informed decisions about their work. These seven key considerations provide a foundation when it comes to AI development, deployment and use in activist spaces, and hopefully, everywhere else. A couple weeks before the Roundtable, we sent these draft principles to our invited experts, hoping to provoke statements that would provide constructive criticism on the principles, as well as help the audience understand the complexities of the topic and the pathways forward.
We were not disappointed. We have some work to do to improve the first draft, and we were ecstatic about the thoughtful, articulate criticisms and praise the principles received.
Here are the DRAFT 7 Principles and brief descriptions we asked our panelists to comment on:
The Appetite for Community
cc-by-nd Bryan Mathers for WAO
What was striking during the Roundtable was not only the willingness of participants to dive into the topic but also the eagerness to have this space for the conversation. Though the Roundtable was a full 90 minutes long, it felt too short. It was clear that this was the beginning of something important.
There’s an opportunity for us as climate and digital rights advocates to co-create new strategies and campaigns, share knowledge, and support one another in navigating this rapidly evolving landscape. Together we can think about what a better future looks like and fight the behemoth of Big Tech and Big Business to have a say in how our world is designed. Indeed, it’s what activists have always done — present a different future, find a way to resist, suggest alternatives.
We need one another, and we need a community of practice where organisations can come together to:
Share best practices and lessons learned from AI use and implementation
Collaborate on research projects exploring the intersection of AI and environmental justice
Develop educational resources and workshops for capacity building
Engage with policymakers, industry leaders, and other stakeholders to advocate for responsible AI development
We believe that such a community has the potential to drive meaningful change. We’re currently exploring ways that we might bring together and support such a community. Perhaps with a community call? Such an initiative requires support, so if you have ideas about funding such an initiative, we’d be all ears!
Coming Soon
As organisations working to protect the planet and promote human rights, we must have an open conversation about AI in our societies. We are now working on an article for the Friends of the Earth (England, Wales and Northern Ireland) website in which we will unpack the complexities around “ethical” or “responsible” AI. We’re aiming for publication in January.
We’ll continue to update you on our progress! If you have ideas or suggestions for how we can make this work stronger, please don’t hesitate to reach out.
Activists, Campaigners and Advocates versus AI was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
In Fall 2023, the DIACC, in collaboration with Agriculture and Agri-Food Canada and the University of Guelph, launched a Special Interest Group (SIG) focused on enhancing traceability in the agri-food sector through digital tools. The Digitizing Traceability of Agriculture and Food SIG convened nearly sixty organizations to discuss how emerging technologies, including blockchain, artificial intelligence (AI), and verifiable credentials, can improve transparency and trust across supply chains. Through a series of virtual sessions, stakeholders shared insights on digitization’s role in advancing food traceability and establishing secure data-sharing frameworks.
Download the report here.
DIACC-DTAF-SIG-Report
This content is also posted on Block's blog
In support of its decentralized identity work, Block is contributing foundational components developed under the Web5 umbrella to the Decentralized Identity Foundation (DIF). For the past several years, Block has been developing a number of open source components to push decentralized identity forward and return ownership of data and identity to individuals.
This contribution includes open source repositories for Decentralized Identity (including the did:dht DID method), Verifiable Credentials, and Decentralized Web Nodes (DWNs):
Decentralized Identifiers (DIDs): An open standard created by the World Wide Web Consortium (W3C), DIDs are self-generated and self-owned identifiers that enable identity authentication in a decentralized world. They are comparable to an email address or username, yet are not owned and controlled by a company or stored on centralized servers. They cannot be deleted or altered by anyone except for the individual who owns them.
Verifiable Credentials (VCs): Verifiable credentials are digital certificates of claims that are easily shared in a private and secure way. They can prove your legal name, age, ownership of an asset, or anything, really. These can convey the same information as physical credentials like a driver’s license or membership card, but in a more tamper-resistant, digital format.
Decentralized Web Nodes (DWNs/DWeb Nodes): With DWeb Nodes, the world now has access to an emerging standard for decentralized storage of app and personal data. This technology was developed by TBD and other contributors in open source organizations. DWeb Nodes enable developers to build decentralized apps and protocols where individuals truly own and control their data.
These components will now reside in DIF’s ecosystem, where they can be further developed and supported by the open source community of decentralized identity experts. DIF, a leading organization developing decentralized identity standards and technologies, will foster these innovations and enable broader adoption by the identity community.
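For readers new to these building blocks, the sketch below shows roughly what a DID document and a Verifiable Credential look like as data, loosely following the W3C data models. The identifiers, keys, and claim fields are made-up placeholders rather than output of the Web5 SDK; real documents are generated and signed by wallet or agent software.

```typescript
// Illustrative W3C-style data shapes; identifiers and key values are placeholders, not real values.

// A DID document: the public material another party resolves in order to verify you.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:alice-1234",
  verificationMethod: [
    {
      id: "did:example:alice-1234#key-1",
      type: "JsonWebKey2020",
      controller: "did:example:alice-1234",
      publicKeyJwk: { kty: "OKP", crv: "Ed25519", x: "<base64url-public-key>" },
    },
  ],
  authentication: ["did:example:alice-1234#key-1"],
};

// A Verifiable Credential: a signed claim an issuer makes about a subject DID.
const membershipCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "MembershipCredential"],
  issuer: "did:example:gym-5678",
  issuanceDate: "2024-11-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:alice-1234",
    membershipLevel: "gold",
  },
  proof: {
    type: "DataIntegrityProof",
    verificationMethod: "did:example:gym-5678#key-1",
    proofValue: "<signature-over-the-credential>",
  },
};
```

The point of the two shapes is the separation of concerns: the DID document holds only public verification material controlled by its owner, while the credential carries issuer-signed claims that can be presented and checked against that material.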
“We are honored to provide a home for these technologies, which are rapidly becoming the backbone of new decentralized applications. As a non-profit foundation committed to decentralized identity, DIF can ensure longevity of these components in a transparent, community-led environment,” said Kim Hamilton Duffy, Executive Director of DIF.
“DIF has the right balance of builders and standards creators, commitment to decentralized identity and open source, and ability to incubate the Web5 SDK and components like DWNs and the did:dht method - where the community can continue to help shape and benefit from these technologies. We look forward to seeing the community’s innovations with these foundational tools and to making decentralized identity accessible to all,” said Manik Surtani, Block Open Source Program Office lead.
As one of the most mature, full-featured decentralized DID methods, did:dht is already advancing toward formal standardization. With support from DIF, W3C, Trust Over IP Foundation, and other leading organizations, this collaboration will bring Web5 innovations closer to reality and promote a robust, privacy-centric digital ecosystem.
Block and DIF are committed to empowering developers, organizations, and communities through open source and open standards.
Cambridge, MA — Tuesday November 26th 2024. As Bitcoin soars past its all-time highs, capturing global attention and sparking renewed interest in decentralized finance, a team of Harvard students and alumni called the New Bretton Woods (NBW) Labs has released a whitepaper detailing the development of a Bitcoin-backed stablecoin using the Bitcoin Elastos Layer 2 (BeL2) protocol.
Developed by Elastos, a SmartWeb ecosystem provider, BeL2 is presented in the NBW whitepaper as central to a Bitcoin-powered SmartWeb, which will redefine global finance, echoing the impact of the original Bretton Woods Agreement of 1944. Leveraging the resources available through a Harvard Innovation Labs student membership, the NBW whitepaper explains how NBW aims to replace inflation-prone fiat currencies with a stable, Bitcoin-backed currency and provides accessible, transparent, and programmable financial tools for everyone.
At the core of this project is the Native Bitcoin Stablecoin (NBS), a Bitcoin-collateralized asset pegged 1:1 to the U.S. dollar. This stablecoin allows Bitcoin holders to collateralize and unlock stablecoin liquidity without selling their BTC or entrusting it to a third-party custodian, preserving wealth and providing a hedge against inflation. The NBW Bitcoin-backed stablecoin on Bitcoin-Elastos Layer 2 (BeL2) aims to provide global stability and accessibility by creating an open financial platform available to anyone with internet access, granting individuals financial sovereignty in multiple ways.
“We believe that a Bitcoin-backed stablecoin is not only possible but necessary,” said Jacob Li, Head of Operations at NBW. “By using Bitcoin as collateral, we can offer a stable and secure financial instrument that empowers users without compromising the fundamental principles of decentralization.”
Built on the Bitcoin-Elastos Layer 2 (BeL2) protocol, the system combines Bitcoin’s unparalleled security and decentralization with Elastos’ scalable and interoperable Bitcoin-backed SmartWeb innovation. Elastos’ BeL2 protocol enables Bitcoin to communicate with other EVM-compatible blockchains without wrapping or bridging assets or using third-party custodians. This technology enables non-custodial Bitcoin collateralization via mainnet locking scripts, verified with Zero-Knowledge Proofs for privacy. Oracles bridge proofs to EVM-compatible smart contracts for a service, while decentralized, collateralized Arbitrator nodes manage time-based operations and dispute resolution.
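To illustrate the collateralization mechanics in the abstract, here is a small, hypothetical TypeScript sketch of over-collateralized minting and a liquidation check. The 150% minimum collateral ratio, the 120% liquidation threshold, and the price figures are illustrative assumptions, not parameters taken from the NBW whitepaper or the BeL2 protocol.

```typescript
// Hypothetical over-collateralized stablecoin math; ratios and prices are illustrative,
// not parameters from the NBW whitepaper or the BeL2 protocol.

const MIN_COLLATERAL_RATIO = 1.5;   // lock $1.50 of BTC per $1.00 of stablecoin minted
const LIQUIDATION_THRESHOLD = 1.2;  // position becomes liquidatable below 120%

interface Vault {
  btcLocked: number;      // BTC locked via a mainnet locking script
  stablecoinDebt: number; // USD-pegged stablecoin minted against it
}

function maxMintable(btcLocked: number, btcPriceUsd: number): number {
  return (btcLocked * btcPriceUsd) / MIN_COLLATERAL_RATIO;
}

function collateralRatio(vault: Vault, btcPriceUsd: number): number {
  return (vault.btcLocked * btcPriceUsd) / vault.stablecoinDebt;
}

function isLiquidatable(vault: Vault, btcPriceUsd: number): boolean {
  return collateralRatio(vault, btcPriceUsd) < LIQUIDATION_THRESHOLD;
}

// Example: lock 1 BTC at an assumed price of $90,000 and mint up to $60,000 of stablecoin.
const vault: Vault = { btcLocked: 1, stablecoinDebt: maxMintable(1, 90_000) };
console.log(vault.stablecoinDebt);          // 60000
console.log(isLiquidatable(vault, 90_000)); // false (ratio 1.5)
console.log(isLiquidatable(vault, 70_000)); // true  (ratio ~1.17, below 1.2)
```

In a design like the one described above, the price feed would come from oracles and the liquidation path would be enforced by arbitrator nodes and smart contracts rather than by a single operator; the sketch only shows the arithmetic those components would agree on.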
Advantages of NBW Approach
Liquidity without selling BTC: Bitcoin holders can mint NBS using their BTC as collateral, accessing liquidity while retaining ownership. This approach is ideal for those looking to preserve long-term exposure to Bitcoin’s appreciation without selling.
Hedge against inflation: NBS serves as an accessible, decentralized store of value, hedging against inflation without the need to convert to fiat currency.
Access to Decentralized Finance (DeFi): Once minted, NBS can be utilized across DeFi platforms and smart contracts for lending, borrowing, rights trading, and liquidity provision.
The NBW system not only anchors its stablecoin on Bitcoin’s security but also supports Satoshi Nakamoto’s 2010 vision of merge-mining through Elastos’ technology. This consensus mechanism allows Bitcoin miners to simultaneously secure the Elastos network without extra computational effort. With 44.58% of Bitcoin’s global hashrate merge-mining Elastos—including major pools such as Antpool, ViaBTC, F2Pool, and Binance—it achieves exceptional security, making its cryptocurrency, ELA, one of the most secure Bitcoin derivative assets worldwide. ELA additionally serves as collateral for Arbitrator nodes, enabling them to earn BTC rewards by supporting BeL2 transactions, such as dispute resolution for loans, and by facilitating stablecoin operations like NBW, which help maintain the peg and manage liquidations.
“People are seeking more ways to utilize their Bitcoin,” said Sasha Mitchell, Head of Operations at Bitcoin Elastos Layer 2. “Unlocking its dormant value with a stablecoin through Elastos’ solutions opens new possibilities for users to engage with decentralized finance without compromising security.”
Use Case for Native Bitcoin Stablecoin
This symbiotic relationship with Bitcoin establishes a new decentralized finance (BTCFi) model, where Bitcoin’s trust and security power scalable tools like NBW stablecoins and loans. NBS’s stability and programmability enable its use as a medium of exchange for everyday transactions, expanding decentralized finance’s accessibility. Institutions and governments can leverage NBS as a reserve asset, liquidity management tool, or framework for their own stablecoin solutions, benefiting from its transparency, decentralization, and ability to strengthen financial resilience while supporting strategic reserves.
By combining Bitcoin’s strengths with decentralized finance, the NBW system aims to restore financial stability, promote economic inclusion, and unlock the multi-trillion-dollar value dormant in Bitcoin today. The Native Bitcoin Stablecoin is a stepping stone toward a fairer and more resilient financial future.
Access the Whitepaper: https://www.nbwlabs.org/Whitepaper.pdf
Learn More: Visit www.nbwlabs.org
Contact Us: Email contact@nbwlabs.org
About New Bretton Woods (NBW)
The New Bretton Woods project is an independent initiative led by several Harvard students and alumni, currently being developed through a student membership of the Harvard Innovation Labs during the Fall 2024 semester. By merging Bitcoin’s security with Elastos’ innovative technology, NBW aims to redefine the global financial system and introduce a stablecoin designed for the digital age.
Thank you for being part of this journey as we bring native Bitcoin DeFi closer to reality! Did you enjoy this article? Follow Infinity for the latest updates here!
The Digital Identity unConference Europe (DICE) 2024 took place this June in Zürich, welcoming global experts to discuss not just secure digital identities but the creation of an integrated ecosystem of trust. Over three sunny days at Trust Square, attendees worked to connect digital trust to real-world economic advantages by focusing on authentic data, verifiable proofs, and practical applications that go beyond mere identification. This was made possible through the collaboration of Trust Square, DIDAS, and IIW, emphasizing the power of partnerships in advancing digital trust.
A New Direction for Swiss Digital Identity
Bundesrat Beat Jans, head of the Swiss Federal Office of Justice and the leading figure behind Switzerland’s e-ID project, opened the conference with updates on the nation’s progress. Switzerland is advancing its secure digital identity framework while maintaining technology-neutral principles, ensuring future adaptability.
The new e-ID law, under parliamentary review, defines the required trust infrastructure without mandating specific technologies, providing issuers with the flexibility to choose tailored solutions. This approach strengthens market responsiveness, protects investments, and ensures seamless integration with emerging innovations.
However, balancing security and accessibility remains a challenge. By limiting the initial rollout of the e-ID to a government-developed open-source wallet, Switzerland aims to provide robust security while ensuring transparency. Plans to engage with the OpenWallet Foundation and promote open hardware crypto processors reflect Switzerland’s commitment to digital sovereignty and international standardization.
Beyond Identity: Building a Trust Ecosystem
DICE 2024 highlighted that the economic value of digital trust lies in creating a broader ecosystem—not just digital identities. Verifiable credentials, authentic data, and secure proofs were discussed as enablers of trust in industries like healthcare, finance, and logistics.
By emphasizing data integrity, industries can streamline processes, reduce fraud, and build consumer trust. For example, verifiable credentials can authenticate professional qualifications or certifications in real-time, while trusted data channels improve supply chain transparency. DICE underscored the need for governance models to maintain consistency and reliability in such ecosystems, paving the way for more secure, efficient, and scalable solutions.
Economic Implications of Trust Ecosystems
The trust ecosystem proposed at DICE 2024 offers significant economic benefits:
Fraud reduction: Improved verification lowers financial losses.
Streamlined compliance: Simplified KYC and AML processes reduce administrative burdens.
Efficiency gains: Accelerated verification boosts operational efficiency and customer experiences.
Market expansion: Trust ecosystems create opportunities in sectors previously hindered by security concerns.
Innovation stimulation: Open standards encourage the development of interoperable solutions.
By linking trust-building efforts to tangible economic outcomes, DICE 2024 shifted the conversation from abstract concepts to measurable benefits.
The DICE unConference Experience
The open and collaborative format allowed attendees to co-create the agenda, ensuring discussions addressed real-world challenges. Key topics included:
Interoperability: Cross-border and cross-platform systems to streamline international transactions.
Zero-knowledge proofs: Privacy-preserving verification methods.
Authentic data sharing: Applications in industries like healthcare and finance.
User-centric solutions: Accessibility for all users, including those with older technology.
Governance frameworks: Clear standards for public-private collaboration.
These discussions highlighted the importance of aligning technological innovation with user needs and industry-specific applications.
Looking Ahead: DICE 2025
To build on the momentum of 2024, DICE has announced two major events for 2025:
1. DICE Ecosystems | March 4-5, 2025
2. DICE 2025 | September 2-4, 2025
Focus: Advancing technologies and frameworks for digital identity and trust.
DICE 2024 clarified that digital identity and trust are not standalone constructs—they drive real economic advantages. By linking trust ecosystems to fraud reduction, compliance efficiency, and market expansion, the conference demonstrated how secure systems empower businesses and consumers alike.
The Path Forward
As the digital trust landscape evolves, collaboration and innovation remain vital. With DICE continuing to shape the agenda, the focus will be on actionable strategies that enhance security, scalability, and inclusivity.
For more information and updates on upcoming events, visit the DICE official website. Let’s continue to collaborate and innovate to forge a digital future that benefits everyone.
Sponsors keep conference fees low by supporting the virtual platform and unConference set-up, providing meals, and more, making DICE available to all who want to attend, participate, and contribute.
If you are interested in becoming part of the growing community of Sponsors supporting DICE and the real time work that happens at this event, please contact the Trust Square team.
Via Marte’s extensive variety of products and very fast product renewal cycles made it challenging to efficiently manage operations and logistics.
A consistent use of GS1 standards to identify individual products and their movements across the supply chain with high levels of data availability.
100% accurate real-time inventory data, highly efficient stock management, a 4% reduction in shipping costs—and more.
case-study-gs1-brazil_via-marte.pdf
The SAFc Registry represents more than just a technical solution that Energy Web developed — it’s a mission-driven effort to decarbonize aviation. As a non-profit-driven book and claim registry, it’s designed to ensure transparency, accountability, and scalability in the deployment of SAF. From the outset, our goal has been to create a trusted system for tracking and verifying SAF certificates, enabling the whole aviation supply chain to credibly participate in the energy transition.
A Groundbreaking Beginning
When we launched the SAFc Registry, it was a first in many ways for Energy Web. It was the inaugural implementation of a Green Proofs registry, setting the stage for the recent launch of Katalist, the Green Proofs registry we built for sustainable maritime shipping. The SAFc Registry also broke new ground with the innovative use of worker node technology for data validation — a distributed, multi-party approach that ensures reliability and trust.
Energy Web’s role initially focused on providing the technology behind the registry, but as the platform matured, so did our involvement. We’ve since stepped into the role of SAFc Registry Administrator, cementing our commitment to driving this critical initiative forward day-to-day.
Building Momentum
While the launch in Q4 2023 was exciting, the real momentum picked up in Q1 2024, and the past year has been a whirlwind of activity and growth:
Engagement and Education: We held countless introduction calls and demos, spreading awareness about the registry and its potential impact across the aviation and energy industries.
Global Adoption: Hard work paid off as we onboarded 50 companies from across the globe and a diverse range of industries.
Onboarding and Operations: We refined our onboarding pipeline and operational processes, ensuring a smooth experience for users.
Impact in Numbers: To date, the SAFc Registry has facilitated over 50 SAFc issuances, representing over 3,000 tonnes of SAF that has been produced and is at work displacing the use of conventional jet fuel.
Continuous Improvement: We’ve listened closely to user feedback, rolled out new features (including an API for registry power users), resolved bugs, and refined our terms and conditions to better meet user needs.
Governance: Convening our governing board for the first time was a key step in formalizing the policies and processes that will guide the registry’s growth into the future.
Lessons Learned
Like any pioneering effort, launching the SAFc Registry came with its share of challenges and learning opportunities. The v1 version of the registry taught us what worked — and what didn’t. These lessons have been invaluable as we’ve evolved the platform to better serve our users. The enhancements we’ve deployed reflect our commitment to listening, learning, and adapting.
Looking Ahead
Our work is far from over. Just this week, we’ve rolled out a new statistics section to the homepage providing users and the broader public with a clearer view of registry activity. It’s just one of many features in development as we work through a robust backlog of user-requested improvements and policy updates from our governing board.
The future of SAF is bright. Announced SAF projects are expected to increase the global supply by more than 10x by 2030. The SAFc Registry is poised to play a central role in supporting this growth, ensuring the scalability and credibility of SAF adoption on a global scale.
A Heartfelt Thank You
None of this would be possible without the incredible support of our partners, users, and team. As we celebrate this milestone, we’re filled with gratitude and optimism for what lies ahead. Here’s to the continued success of the SAFc Registry and the advancement of sustainable aviation.
Let’s keep flying higher — together.
About Energy Web
Energy Web is a global technology company driving the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to create innovative market mechanisms and decentralized applications, empowering energy companies, grid operators, and customers to take control of their energy futures.
Celebrating One Year of the SAFc Registry: A Look Back and Forward was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
The Engine Room is seeking an experienced, curious, and team-oriented communications person to assume the evolving role of leading communications at The Engine Room.
The post [CLOSED] Join our team! We’re looking for our next Associate for Communications appeared first on The Engine Room.
This content was originally posted on Algorand's blog.
The Algorand Foundation has joined the Decentralized Identity Foundation (DIF), strengthening its commitment to open standards in digital identity. This move follows the development of did:algo, a decentralized identifier (DID) built on Algorand.
Algorand’s DID method exemplifies the Foundation's commitment to standardized identity solutions, as did:algo was built to the DID Spec from W3C (the World Wide Web Consortium). The implementation of did:algo enables individuals and organizations to generate their identifiers using the Algorand blockchain, providing a secure and permissionless system for identity management. The method supports both direct identity creation and third-party verification, adding flexibility to identity management approaches.
did:algo is already being used in Kare Wallet, enabling survivors of natural disasters to quickly confirm their digital identities and get efficient access to the aid and recovery resources they need, across multiple organizations and entities.
DIF leads the development of decentralized identity infrastructure through technical specifications and open standards. The organization coordinates engineers and technical teams worldwide to create specifications for digital identity systems that work across different platforms and technologies. DIF is building an ecosystem where individuals and entities control their digital identifiers without relying on centralized systems.
Bruno Martins, Principal Architect at Algorand Foundation, explains the strategic importance of this collaboration: "Contributing and aligning to open standards defined by bodies such as DIF enables us to ensure interoperability with the rest of the digital world, influence industry direction, foster innovation, and build trust. All the while reducing costs, aligning with regulatory standards, and avoiding creating unnecessary custom solutions that keep us in an ecosystem bubble."
Kim Hamilton Duffy, Executive Director of DIF, notes that Algorand’s involvement will advance core technical efforts: “At DIF, our mission is to build digital identity solutions that prioritize individual control, privacy, and security. Algorand’s technical expertise and commitment to decentralized identity accelerate our progress, strengthening our joint vision for decentralized infrastructure that empowers users with secure and trusted identity systems through open standards.”
The Foundation will continue to share updates on decentralized identity developments, particularly focusing on implementations that leverage Algorand's infrastructure for secure, scalable identity solutions.
In this PREDICT 2025 USA interview, Andrew Shikiar, Executive Director and CEO, FIDO Alliance, discusses how the industry has been exploring the death of the password for decades, how this conversation has evolved and where we are with passkeys today – pinpointing why making progress with eliminating dependence on passwords is of paramount importance.
Watch the interview with Andrew Shikiar on “The Future of Payment Authentication.”
How do we get from this—
To this—
?
By making customers independent.
Hmm… maybe The Independent Customer should be the title of my follow-up to The Intention Economy.
Because, to have an Intention Economy, one needs independent customers: ones who are in charge of their own lives in the digital world:
Who they are—to themselves, and to all the entities they know, including other people, and organizations of all kinds, including companies.
What they know about their lives (property, health, relationships, plans, histories)—and the lives of others with whom they have relationships.
Their plans—for everything: what they will do, what they will buy, where they will go, what tickets they hold, you name it.
Add whatever you want to that list. It can be anything. Eventually it will be everything that has a digital form.
What will hold all that information, and what will make that information safely engageable with other people and entities?
A wallet.
Not a digital version of the container for cash and cards we carry in our purses and pockets. Apple and Google think they own that space already, which is fine, because that space is confined by the mobile app model. Wallets will be bigger and deeper than that.
Wallets will embody two A’s: archives and abilities. Among those abilities is AI: your AI. Personal AI. One that is agentic for you, and not just for the sellers of the world.
Interesting harbinger: Inrupt now calls Solid pods “wallets.” (Discussion.)
Wallets are how we move e-commerce from a world of accounts to a world of independent customers with personal agency. With AI agents working for them and not just for sellers.
In his latest newsletter, titled ‘A-Commerce’ will be the biggest disruption since the web, and Digital Wallets are the new accounts, Jamie Smith says this:
The Web3 crowd say digital wallets are about transferrable digital assets and ownership without a central authority. And they are right.
But there’s more.
Many payments and identity experts will say that digital wallets are really about identity. Proving who you are and what you are entitled to do (tickets, access). Maybe even with fancy selective disclosure features.
They are also right. But that’s not the whole picture.
A pioneering group of others believe that digital wallets are really about the portability of any verifiable information, and digital authenticity.
And they too are right. We’re now getting much, much closer to what I’m talking about. But there’s still more.
Once individuals can show up independently, with their own digital tools – digital wallets with verifiable data, identity and digital assets – then we have something new, something special.
It’s a New. Customer. Channel.
Once a business asks for some data from a customer’s digital wallet, they have the opportunity to form a new digital connection with that customer.
A persistent one.
A verifiable one.
A private one.
An auditable, secure and intelligent one.
My goodness, what business wouldn’t want that? Imagine plugging that customer connection directly into business systems and processes, like CRM.
Yes, digital wallets can hold and manage assets. And identity. And portable, verifiable, authentic data.
But with the narrower ‘data and assets’ framing, we risk missing the larger market opportunity.
Digital wallets become the new account.
For everything.
OK so what is an account?
With money, it’s a shared and trusted record of all your transactions. Who did what, who paid what, and who owes who.
With business, it’s a shared record of all your products and interactions. It’s a critical customer channel and interface. The place people come to check things. To ask things. To ‘do business’.
Each customer account has a number. A unique identifier. It has a way to message customers. A way to record what’s been sent to, and received from, the customer.
Ring a bell?
Digital wallets will be able to do all this and much more.
They will also be more secure. More private. More flexible. And more portable.
So it’s possible – I’d even argue more likely – that digital wallets may be more disruptive than browsers were in the 1990s.
But like browsers, they will first be misunderstood.
Digital wallets will become the new account.
For business? For government? For banking? For health? For travel?
For life.
I have said for over a decade that the only 360° view of the customer, is the customer.
Just imagine, once a customer can bring their own wallet – their own account – to each business:
The economics change. Why would a business maintain a complex and proprietary account platform when digital interactions can be handled – indeed automated – via a verifiable digital wallet that’s available on every smart device?
The data flows change. Why would a business store unnecessary customer data when they can just ask for it on demand, with consent, from the customer’s digital wallet? Then delete it again once used?
The risks change. What if we could reduce fraud and account takeover to near zero, when every customer interaction has to be authenticated via the customer’s digital wallet (likely with biometrics)?
The very fabric of the customer relationship changes.
This is just a glimpse of what‘s possible, and what’s coming. Especially when you tie it to digital AI agents….
When you look closely, you’ll see that digital wallets aren’t even The Thing. They are ‘below the surface’ of the customer channel.
Lots to be written about that. Coming soon.
For now, it’s a simple switch: when you hear ‘account’, just think ‘wallet’.
Here is the challenge: making wallets a must-have: an invention that mothers necessity.
We’ve had those before, with—
PCs
word processors and spreadsheets
the Net and the Web, graphical browsers
personal publishing and syndication
smartphones and apps
streams and podcasts.
Wallets need to be like all of those: must-haves that transform and not just disrupt.
It’s a tall order, but—given the vast possibilities—one that is bound to be filled.
As for why this won’t be something one of the bigs (e.g. Apple and Google) do for themselves, consider these five words you hear often online:
“Wherever you get your podcasts.”
Those five words were made possible by RSS.
It’s why all of the things in the bullet list above are NEA:
Nobody owns them
Everybody can use them
Anyone can improve them
When we have wallets with those required features, and they become inventions that mother necessity, we will have truly independent customers.
And we will finally prove ProjectVRM’s prime thesis: that free customers are more valuable than captive ones—to themselves and to the marketplace.
Kia ora,
While the US elections took the media limelight this month, I’d like to remind our members of another election of more local significance—Digital Identity NZ’s annual election to vote candidates onto our Executive Council—so we can continue the drive towards a robust and equitable digital identity ecosystem.
Voting closes next Monday 25th, with successful candidates announced at the Annual Meeting on December 5th.
Register for DINZ Annual Meeting
Welcoming New Members
A warm welcome to our newest members: api connects, Deloitte, and SushLabs. It’s great to see our community growing, and we look forward to the fresh perspectives they bring.
Banking in the Spotlight – ANZ and BNZ Announcements
Major corporate members ANZ and BNZ were in the news this month with banking tech-related statements—ANZ announcing its partnership with Qippay, and BNZ announcing its acquisition of Blinkpay soon after the release of its new anti-scam app to reduce fraud.
Open Banking is gaining traction in Aotearoa, with 80% of consumer bank accounts now covered by open banking initiatives. This data was surfaced in the recently launched OpenFinanceANZ report and ecosystem map which was supported by our member PaymentsNZ alongside MasterCard, Fintech Australia and our NZTech Group partner FinTechNZ. Check out FinTechNZ’s report highlights here.
DISTF Milestone and DINZ’s Support
Hon. Judith Collins announced the finalisation of the Digital Identity Services Trust Framework (DISTF), with the milestone picked up internationally by Biometric Update. The announcement contained a link to this sub-site for the Trust Framework with new details regarding the accreditation process hitherto unseen by industry.
Despite our initial surprise, DINZ released its own statement in support as we prepare to re-engage with officials. DINZ’s DISTF Working Group is the forum for accreditation discussions and we’re here to support our members with DISTF education. We encourage organisations with views regarding DISTF accreditation to get in touch.
Code of Practice for Inclusive and Ethical Digital Identity
DINZ is pleased to release an advanced draft of its code of practice for the inclusive and ethical use of digital identity. This code of practice provides a roadmap for ethical and inclusive digital identity practices in New Zealand.
It benefits DINZ members by providing a framework for responsible conduct, and the broader digital identity ecosystem by fostering trust, promoting inclusivity, and ensuring alignment with national and international standards. This code is a work in progress, shared to spark reflection and dialogue. Together, let’s shape a future where digital identity empowers and respects the rights and dignity of all people in New Zealand.
In order to progress to a published code of practice, we need your feedback. We encourage our members to read the draft and provide feedback here >>
Read Draft Code of Practice
DIA Training Schedule Confirmed
Identification management plays a core role in our work, and members should have a foundational level of understanding of what it is and how it impacts their customers. Good identification management helps reduce and/or prevent fraud, loss of privacy and identity theft by applying good practices and processes.
Topics covered in DIA Training Courses:
Here is the schedule of courses between November and February.
Online learning at your own pace:
Half-day Zoom courses:
Thursday 5 December | 9am-12pm | G3 & G4
Wednesday 22 January | 9am-12pm | G1 & G2
Wednesday 26 February | 9am-12pm | G3 & G4
Interested in signing up for any of the Zoom sessions? Email identity@dia.govt.nz with the G or HD reference number. A Zoom link will be supplied to those registered.
Addressing Bias in Biometrics
Recent media coverage regarding claims of racial bias in facial biometrics has prompted the DINZ Biometrics Special Interest Group to look into the feasibility of a consented dataset of Kiwi faces, with the group using its independence to take on the role of custodian and facilitate subsequent software testing to address bias and accuracy.
Modest funding will be required so please get in touch if you would like to support this critical work. It’s essential for all public and private sector organisations deploying biometrics. On that note, we’re looking forward to NEC’s webinar next Tuesday.
Reflecting on 2024 and Looking Ahead
This is the last newsletter for 2024, published during International Fraud Awareness Week, a timely reminder that progress on digital identity is essential for reducing the impact of scams in Aotearoa and plays a crucial role in enabling broader participation in the digital economy.
The DISTF Act’s implementation, the Customer and Product Data Bill (with open banking and possibly electricity as sectors designated for regulation), revisions to the AML regime, Next Generation Payments, the emergence of digital identity acceptance networks, the digital farm wallet for the rural sector, and the ever-closer digital drivers licence have all contributed to the growing vibrancy and diversity of an emergent digital trust ecosystem. It’s good progress but there’s more work to do.
We’ll resume our newsletters after the holidays, and we’ll close out the year with the Coffee Chat in a fortnight. So on behalf of the Executive Council and the DINZ team, we wish you a Meri Kirihimete me te tau hou.
Ngā mihi,
Colin Wallis
Executive Director, Digital Identity NZ
Read the full news here: Council Elections, DISTF Milestone, and End-of-Year Highlights
The post Council Elections, DISTF Milestone, and End-of-Year Highlights | November Newsletter appeared first on Digital Identity New Zealand.
The DIF 2024 Hackathon showcased incredible talent and innovation. Participants tackled real-world challenges across Verifiable AI, Proof of Personhood, Education & Workforce, Privacy-Promoting Credentials using Zero-Knowledge Proofs, Decentralized Storage, and Frictionless Travel.
Congratulations to all the winners and a big thank you to our sponsors for making this event possible!
Our winners
See our DevPost Gallery for the list of winners!
Special thanks to our sponsors
We'd like to thank our sponsors for their support in so many ways, including crafting fascinating challenges for our participants to learn and up-level their skill sets.
Gold tier sponsors: Digital Credentials Consortium, Jobs for the Future Foundation, Pinata, Privacy + Scaling Explorations
Silver tier sponsors: ArcBlock, Truvity, Vidos
Bronze tier sponsors: Anonyome, cheqd, Crossmint, NetSys, Ontology, TBD/Block
Tooling sponsors: Trinsic
Thanks to our volunteers
We'd like to thank our volunteers who helped with scoring the submissions:
Advait Patel, Senior Site Reliability Engineer (DevSecOps + Cloud + AIOps)
Dolores-Mai Macauley, Software Developer
Mengyi (Gloria) Wang, Affiliated Contributor, the Center for Law, Tech, and Social Good at USF
Thanks to our judges and mentors
Thank you to our judges, who also encouraged and assisted the participants throughout the event:
As the saying goes, malicious actors don’t break in—they log in. There’s a significant truth in that statement. Today, many organizations struggle to protect their staff from credential phishing, a challenge that’s only grown as attackers increasingly execute “MFA bypass” attacks.
In an MFA bypass attack, threat actors use social engineering techniques to trick victims into providing their username and password on a fake website. If victims are using “legacy MFA” (such as SMS, authenticator apps, or push notifications), the attackers simply request the MFA code or trigger the push notification. If they can convince someone to reveal two pieces of information (username and password), they can likely manipulate them into sharing three (username, password, and MFA code or action).
Make no mistake—any form of MFA is better than no MFA. But recent attacks make it clear: legacy MFA is no match for modern threats. So, what can organizations do? Sometimes a case study can answer that question.
Today, CISA and the USDA are releasing a case study that details the USDA’s deployment of FIDO capabilities to approximately 40,000 staff. While most of their staff have been issued government-standard Personal Identity Verification (PIV) smartcards, this technology is not suitable for all employees, such as seasonal staff or those working in specialized lab environments where decontamination procedures could damage standard PIV cards. This case study outlines the challenges the USDA faced, how they built their identity system, and their recommendations to other enterprises. Our personal favorite recommendation: “Always be piloting”.
FIDO authentication addresses MFA-bypass attacks by using modern cryptographic techniques built into the operating systems, phones, and browsers we already use. Single sign-on (SSO) providers and popular websites also support FIDO authentication.
Passkeys allow users to log in to their secure accounts without passwords. Ecommerce businesses were first in line when the FIDO Alliance introduced passkeys in 2022. The trade association, whose name stands for Fast Identity Online, launched in 2012 with a mission to reduce the world’s password reliance.
Andrew Shikiar, executive director of FIDO, said the past two years have been momentous for members and ecommerce businesses. “You want to attract customers to your site and protect them from account takeover, credential stuffing, and phishing attacks,” he said. “That’s why PayPal, eBay, Amazon, Walmart, Best Buy, and other ecommerce companies were the earliest adopters of passkey payments.”
Shikiar noted that passkey awareness has risen from 39% in 2022 to 57% in 2024, according to a FIDO survey of 10,000 consumers in the U.S., U.K., France, Germany, Australia, Singapore, Japan, South Korea, India, and China.
Watch the video to learn how to go passwordless with passkeys.
What is a passkey?
A passkey is a FIDO authentication credential that lets a user sign in to apps and websites with the same process they use to unlock their device (biometrics, PIN, or pattern). Passkeys are cryptographic credentials tied to a user’s account on a website or application, so users no longer need to enter usernames, passwords, or additional factors; they simply approve the sign-in the same way they unlock their device.
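To make this concrete, here is a minimal browser-side sketch of creating a passkey with the standard WebAuthn API (`navigator.credentials.create`). The relying-party name, user record, and local challenge are placeholder assumptions; in a real deployment the challenge comes from your server and the resulting credential is sent back for verification.

```typescript
// Minimal passkey (discoverable credential) registration sketch using WebAuthn.
// "example.com", the user record, and the local challenge are illustrative only.
async function registerPasskey(): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // normally server-issued

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Service", id: "example.com" },
      user: {
        id: new TextEncoder().encode("user-1234"), // stable, opaque user handle
        name: "alex@example.com",
        displayName: "Alex Example",
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256 fallback
      ],
      authenticatorSelection: {
        residentKey: "required",       // make it a discoverable credential (passkey)
        userVerification: "preferred", // biometric, PIN, or pattern
      },
    },
  });

  // In a real flow, send `credential` to the server for attestation verification.
  console.log("Created credential:", (credential as PublicKeyCredential | null)?.id);
}
```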
Learn more about the benefits of using passkeys and how to get started with passkeys by visiting the FIDO website: https://fidoalliance.org/passkeys/
For more passkey-related resources, visit passkeycentral.org today: https://www.passkeycentral.org/home
We all know passwords are frustrating to use and not safe. Passkeys are the replacement for passwords. The strong cryptographic security behind passkeys prevents phishing attacks and reduces security breaches and account takeovers. Passkeys make sign-ins fast, simple, and secure, and they sync easily across all of a user’s devices, including new ones. For businesses, passkeys reduce IT time, avoid desktop hassles, and eliminate costly password resets.
Ready to switch to passkeys?
Visit Passkey Central today to get started: https://www.passkeycentral.org/home
Greetings Elanauts! As we approach the close of 2024, we are thrilled to share progress and updates on Elastos’ BeL2 Arbiter Protocol—a key element of BeL2’s mission to provide a trustless, decentralised financial infrastructure for native Bitcoin (NB) DeFi. While development is ongoing, here’s an update on the anticipated features and the roadmap toward delivery.
Key Features of the Arbiter Protocol
Asset Versatility for Arbiter Registration: The upcoming protocol will support Elastos (ELA) and BPoS NFTs as acceptable assets for staking as margin. This focused asset support strengthens the connection to the Elastos ecosystem while incorporating BPoS NFTs to support additional mainchain staking and mainchain ELA rewards.
Governance for DApp Integration: DApps will need to register and receive approval to use the protocol. This process will either be overseen by a designated administrator or structured through a DAO protocol, ensuring fairness and collective decision-making. Further details regarding governance will be refined collaboratively with the community.
Transaction Management and Fees: Each DApp transaction using the protocol will designate an arbiter and incur a handling fee. The fee will depend on the arbiter’s pledged assets and the transaction duration. For instance, at a 10% annualised rate, a 6-month transaction requiring 10,000 ELA pledged would generate a handling fee of 500 ELA (see the sketch below).
Future Incentive Mechanisms: At present, the protocol does not include a token issuance mechanism for arbiters. This element, part of the broader incentive system, will be evaluated during future development.
Development and Rollout Roadmap
Finalising Product Features (3–4 weeks remaining): The development team is focused on completing the functional components of the Arbiter Protocol.
Internal Testing and Refinements (~2 weeks): Comprehensive testing will follow, ensuring functionality and addressing any issues identified.
Alpha Release (Internal Testing & Feedback): A limited release will enable stakeholders to provide feedback on the protocol’s design and performance.
Beta Release (Community Testing & Feedback): The protocol will then be opened to the community for further testing and input to fine-tune its operation.
Looking Ahead
The Arbiter Protocol delivers the final piece of BeL2’s decentralised clearing service. By introducing execution through dispute resolution and time-based transaction handling, it completes the system’s foundation. This capability adds to the already established components of collateralisation, verification, and cross-chain communication, offering a fully operational framework for NB DeFi.
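As a sanity check on the fee example above, here is a minimal sketch of the arithmetic, assuming a simple pro-rata annualised rate; the actual on-chain fee logic may differ from this simplification.

```typescript
// Pro-rata handling fee: pledged amount x annual rate x (duration in months / 12).
// This mirrors the worked example in the post; the real protocol may compute fees differently.
function arbiterHandlingFee(pledgedEla: number, annualRate: number, durationMonths: number): number {
  return pledgedEla * annualRate * (durationMonths / 12);
}

console.log(arbiterHandlingFee(10_000, 0.10, 6)); // 500 ELA, matching the example
```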
The Role of the Arbiter System in Decentralized Clearing
Collateralisation via Locking Scripts
With Version 1 nearing completion, our focus will turn to refinement and ecosystem growth:
Reducing ZKP Generation and Verification Times
Thank you for being part of this journey as we bring native Bitcoin DeFi closer to reality! Did you enjoy this article? Follow Infinity for the latest updates here!
What started as a college project has now become a pantry staple sold in over 5,000 stores.
In this episode, Matt Pittaluga, Co-Founder of Hank Sauce, joins hosts Reid Jackson and Liz Sertl to share how a homemade hot sauce grew into a beloved national brand.
Matt explains how traceability and consistency have been key to scaling the business while keeping product quality high. Through detailed product codes and a robust production database, Hank Sauce tracks every ingredient from batch creation to store shelves, ensuring full transparency and control.
This meticulous approach to data and process has fueled Hank Sauce’s growth from a local favorite to a nationwide success.
In this episode, you’ll learn:
How Hank Sauce scaled its distribution to national retailers The importance of traceability in ensuring food safety and product quality Strategies for building networks to expand brand reach
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(01:34) The Hank Sauce story
(06:38) Grassroots marketing and early sales strategies
(10:09) Scaling up distribution to large retailers
(13:22) The importance of traceability and food safety
(16:11) Building a brand with a limited marketing budget
(19:21) Advice for new entrepreneurs
(26:30) Matt Pittaluga’s favorite tech
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Matt Pittaluga on LinkedIn
The Decentralized Identity Foundation (DIF) is thrilled to invite organizations, developers, and community members to participate in an exciting event centered around fostering interoperability within the DIDComm ecosystem.
What is DIDComm?
DIDComm (Decentralized Identifier Communication) is an open standard designed for secure, private peer-to-peer communication, powered by Decentralized Identifiers (DIDs). Unlike traditional communication protocols that depend on centralized entities, DIDComm offers a self-sovereign approach that supports verifiable identities and ensures the highest level of privacy and security.
With the growth of DIDComm implementations across the ecosystem, this event presents an opportunity to come together, test integrations, and enhance interoperability in a collaborative environment.
Event Details
Date: December 12, 2024
Time: 7:00 am PT / 3:00 pm GMT
Format: Each testing session will be 30 minutes long. Participants can choose to test with as many or as few others as desired.
This Interop-a-thon will focus on testing DIDComm V2 features. The goal is to facilitate scenarios such as sending a message from one app to another, with the receiving app decrypting and understanding the content. While different agents may have different functional focuses (e.g., mediators understanding forward messages but not needing to send them), participation helps ensure comprehensive compatibility across various implementations.
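For orientation, here is a sketch of a DIDComm v2 plaintext message, the structure that gets encrypted and routed between agents. The field names follow the DIDComm Messaging spec, but the DIDs, IDs, and Basic Message payload shown here are hypothetical.

```typescript
// A DIDComm v2 plaintext message before encryption. All identifiers are placeholders.
const plaintextMessage = {
  id: "1234567890-abc",                                 // unique message id
  type: "https://didcomm.org/basicmessage/2.0/message", // protocol + message type URI
  from: "did:example:alice",                            // sender DID (hypothetical)
  to: ["did:example:bob"],                              // recipient DID(s) (hypothetical)
  created_time: 1733990400,                             // Unix timestamp
  body: { content: "Hello from the Interop-a-thon!" },  // protocol-specific payload
};

// In practice this object is packed (signed and/or encrypted) with the sender's keys
// and unpacked by the recipient's agent, which is exactly what the event will exercise.
console.log(JSON.stringify(plaintextMessage, null, 2));
```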
Preparation and Resources
To make the most of your experience at the Interop-a-thon, we encourage participants to prepare in advance by using the reference implementation for preliminary testing. The DIDComm Demo supports all core features relevant to the event, with the exception of Out of Band messages. For those needing to create out-of-band invitations, a helpful webpage can be found here to generate invitation URLs and QR codes for testing purposes.
If you have any questions or need assistance before the event, the DIDComm User’s Group is an excellent resource, offering support and guidance as you get ready for a successful event.
Volunteer Opportunities
We’re looking for room facilitators to help coordinate and ensure smooth sessions. If you’re interested, please sign up here.
How to Join
Complete this form and submit it by close of business on December 9, 2024, to confirm your participation.
Whether you’re developing, deploying, or experimenting with DIDComm, this event is an invaluable opportunity to engage with peers and push the boundaries of decentralized communication.
Let’s advance interoperability, together!
I thought it might be a good idea to try to describe the Tao, or the natural way, of WAO.
“In the Tao Te Ching, Laozi explains that the Tao is not a name for a thing, but the underlying natural order of the universe whose ultimate essence is difficult to circumscribe because it is non-conceptual yet evident in one’s being of aliveness.” (Wikipedia)
Doug and I, both founding members of WAO, have a podcast we called the Tao of WAO. The podcast is “A podcast about the intersection of technology, society, and internet culture — with a dash of philosophy and art for good measure.” We’re on hiatus right now, but there are 9 seasons you can enjoy!
WAO is a collective of individuals who broadly agree on many things. Like any group of people, there are nuanced differences in our positions on the issues of the day, so instead of corporate pronouncements, we write things like this Tao or stuff we’ve written on our spirit wiki page.
Using the taxonomy of our podcast tagline, I’ve developed 15 short statements that summarise some of our beliefs:
Technology in the Tao of WAO (cc-by-nd Visual Thinkery for WAO)
Open source is preferred, practicality is required.
We advise our clients and friends on all kinds of technical matters. We have varying beliefs on when to use what kinds of platforms and this statement sums up our approach. We try to find Open Source solutions, but we know that there are a variety of factors — like staff technical skills, developer resources or feature requirements — that need to be taken into account. Sometimes the practical recommendation is the right one.
Open standards lead to a better world.
We are proponents, however, of open source technology and we understand how important it is that our technical infrastructure remains open. There are open standards that underpin much of what we use on a regular basis. The World Wide Web is built on an open standard and so is e-mail. If we want a better world, we need interoperability and cohesion within the tech landscape.
The wild and open Internet is the platform.
It doesn’t make sense to use the same structures, policies and platforms for different people, communities and organisations. There is nuance in every project, so we don’t limit ourselves by recommending the same tools all the time. Instead, we are constantly learning new ways to use the Internet to solve real-world problems. We start with the communities involved, then we figure out how to use the Internet to solve the problems the people within those communities are trying to solve.
Society (cc-by-nd Visual Thinkery for WAO)
Worker owned cooperatives work.
We’re pretty invested in the idea of cooperation. It’s worth noting that cooperatives are not a small ‘subculture’ of the business world. The world’s top 300 cooperative businesses have a turnover of over two trillion dollars a year. 10% of all jobs on the planet are co-op jobs. This economic system is hiding in plain sight and the powers that be don’t want us to know that there is a different way.
Participation matters, so model the behaviours.
Cynics would say that in the grand scheme of things it doesn’t matter what a single person does. We would make a quip about a mosquito having a big impact or steal a quote from Margaret Mead (“Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”) We all have influence and impact on other people, so model the behaviours you want to see in the world and know that your participation matters.
Communities move mountains.
We could steal that Margaret Mead quote a second time, actually. Or, we can simply underline our belief that it is the collective, made up of individuals, that makes the world a better place. There is a more prevalent narrative embedded in (especially) western societies, and it’s one we need to actively disagree with. When we form communities, big or small, we can create more opportunities for everyone.
Internet Culture (cc-by-nd Bryan Mathers)
Digital literacies live on a never-ending continuum.
Dr Belshaw wrote his doctoral thesis on the 8 Essential Elements of Digital Literacies. In it, he writes about ambiguity, defining terms, problems with binary concepts and so, so much more. We at the co-op are keen to help others understand that there is no such thing as being “literate”, digitally or otherwise. Instead, we appreciate acceptance and enthusiasm around the idea that humans are always learning and there is always more to learn.
Structured participation leads to more participants.
Structure helps people find their way into a group. We live in a world of diversity, and yet we tend to hear only the loudest voices. Finding intentional ways to invite participation from more diverse groups of people helps build healthy communities.
Experiment and fail often and loudly.
We are a group of people that likes to try new things. We try to encourage one another to do so, both personally and professionally. We try to be reflective and vocal about our failures, since failing is such a great way to learn.
Philosophy (cc-by-nd Visual Thinkery for WAO)
Learning happens everywhere.
Our brains are constantly changing their very structures. Every day new inputs and impulses work to change the neural pathways that run our whole bodies. We are always learning, even if we’re not trying to. This learning happens at work, in classrooms, at the neighbour’s house, and all forms of learning deserve to be recognised.
Consent-based governance is empowering.
At the co-op we solve prickly problems with a sociocratic approach to decision making. We discuss a proposal and rework it until all members can consent to a decision being made. Governing our cooperative this way means that every member has an invitation to unpick how they’re feeling about something so that everyone can be comfortable in our collective efforts.
We bring our full selves to work.
We have, together, taken courses on conflict resolution and alternative governance structures so that we can bring our full selves to work. We talk to each other about things that are going on inside our minds and do our best to figure out the difference between a perception and an intention. We are human, it’s ok to feel, and talking about that helps us work better together.
Art (Bad Poetry, for the Keep Badges Weird community, now called Open Recognition is for Everybody)
Remix is a compliment.
We’re big believers in open licensing, partially because we love to expand and remix ideas, graphics and artwork. Open licensing gives us, as creative folks, the ability to make a thing and credit where the original idea came from. Because Doug published his AI generated photographs “Time’s Solitary Dance” under an open license, I could take them and write a kind of story to them without asking. It became a collaborative art piece.
Bad poetry is just poetry that is bad, and that’s ok.
Bad poetry written by humans is ok, bad poetry written by AI is just bad. We’re honest enough with ourselves to admit when something we’ve done is kind of terrible, like these last few entries into the Tao of WAO.
The Tao of WAO isn’t a thing, it is ephemeral.
If you ask us in a year what we think of this post, we’ll probably say “Heh? What Tao of WAO post, that’s a podcast!” Then you’ll bring this up and it will have my name on it and I’ll feel embarrassed. Looking forward to it!
What’s your Tao?
Need help figuring it out? We need more weird and wonderful projects, so get in touch!
The Tao of WAO was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
Webinar
This session tackled the coming challenges higher education institutions face in achieving compliance with the new ADA Title II standards. Firm deadlines are looming for institutions of all sizes to comply with new rules mandating that all digital content and mobile apps meet WCAG 2.1, Level AA standards. Join Edge’s accessibility experts as we explore the complexities of implementing these changes within strict timelines, the potential obstacles for institutions attempting to navigate these requirements, and how Edge’s expertise and guidance can streamline the path to compliance. You’ll gain valuable insights on preparing your institution’s digital accessibility strategy and understanding the key elements of this regulation.
This webinar will:
Outline new ADA Title II requirements and their impact on compliance and institutional operations.
Provide insights on addressing accessibility standards sustainably, leading up to and beyond the initial deadline.
Identify the challenges and potential roadblocks in achieving accessibility compliance in time.
Explore how partnering with Edge as a nonprofit consortium can provide the support and resources needed to facilitate compliance.
Complete the Form Below to Access Webinar Recording
The post Interpreting ADA Title II Compliance: Implications for Institutions and How You can Prepare appeared first on NJEdge Inc.
The post Leveraging National Supercomputing Resources for Research and Education appeared first on NJEdge Inc.
Webinar
You are invited to a virtual event.
Presented by: Lukman Ramsey, Ph.D., Head of AI Solutions, Public Sector and Education
This presentation explored the transformative role of artificial intelligence in the educational sector. Drawing on Dr. Ramsey’s extensive experience, he outlined the key challenges currently being addressed by AI technologies and delved into specific applications, particularly AI-driven tutoring and personalized learning experiences. The discussion highlighted innovative solutions developed by Google, demonstrating recent advancements and envisioning the future trajectory of AI in education.
Please contact Forough Ghahramani (research@njedge.net) for additional information.
We are grateful for support from the National Science Foundation.
CC* Regional Networking: Connectivity through Regional Infrastructure for Scientific Partnerships, Innovation, and Education (CRISPIE) project (NSF OAC- NSF0311528)
Complete the Form Below to Access Webinar Recording
The post The Impact of AI on Education appeared first on NJEdge Inc.
The Wireless Broadband Alliance (WBA), the global industry body dedicated to improving Wi-Fi standards and services, announced a new framework integrating WBA OpenRoaming and FIDO Device Onboard (FDO). This initiative is intended to enable a seamless and secure zero-touch onboarding process for Internet of Things (IoT) Wi-Fi devices.
It’s been a couple of years since Apple, Google, and Microsoft started trying to kill the password, and its demise seems more likely than ever.
The FIDO Alliance, the industry group spearheading the passkey push, is putting out some much-needed guidelines to make passkeys usage feel more consistent from one site to the next, and the big tech platforms are getting better at letting you store passkeys in your preferred password manager. Work is also underway on a protocol to let people securely switch between password managers and take all their passkeys with them.
All this is contributing to an air of inevitability for passkeys, especially as major e-commerce players such as Amazon and Shopify get on board. Even if you’re not fully attuned to the passkey movement, you’ll soon have to go out of your way to avoid it.
“Within the next three to five years, virtually every major service will offer consumers a passwordless option,” says Andrew Shikiar, the FIDO Alliance’s CEO and executive director.
Join our report launch!
The post Dec 3 – Join our online event: Alternative social media platforms for social justice organizations appeared first on The Engine Room.
This is important. Be there.
If we want VRM to prove out globally, we have to start locally. That’s what’s happening right now in India, using ONDC (the Open Network for Digital Commerce), which runs on the Beckn protocol.
ONDC is a happening thing:
One big (and essential) goal for VRM is individual customer scale across many vendors. ONDC and Beckn are for exactly that. Here is how kaustubh yerkade explains it in Understanding Beckn Protocol: Revolutionizing Open Networks in E-commerce:
Beckn protocol in the Real World
The Beckn Protocol is part of a larger movement toward creating open digital ecosystems, particularly in India. For example, the ONDC (Open Network for Digital Commerce) initiative in India is built using the Beckn protocol, aiming to democratize e-commerce and bring small retailers into the digital economy. The Indian government supports ONDC for making digital commerce more accessible and competitive.
Here are some practical examples of how the Beckn Protocol can be used in different industries:
1. Ride-Hailing and Mobility Services
Example: Imagine a city with multiple ride-hailing services (e.g., Uber, Ola, Rapido). Instead of using individual apps for each service, a user can use one app powered by the Beckn Protocol. This app aggregates all available ride-hailing services, showing nearby cars, prices, and estimated arrival times from multiple providers. The user can choose the best option, book the ride, and pay directly through the unified app.
Benefit: Service providers gain broader visibility, and users can easily compare services in one place without switching between apps.
https://becknprotocol.io/imagining-mobility-with-beckn/
2. Food Delivery Services
Example: A consumer uses a food delivery app that leverages Beckn to show restaurants from multiple food delivery services (like Zomato, Swiggy, and local food delivery providers). Instead of sticking to just one platform, the user sees menus from different services and can order based on price, availability, or delivery time.
Benefit: Restaurants get listed on more platforms, increasing their exposure, and users can find more options without hopping between different apps.
3. E-Commerce and Local Retail
Example: A shopper is looking for a product (like a phone charger) and uses an app built on the Beckn Protocol. The app aggregates inventory from big e-commerce players (like Amazon or Flipkart) as well as small local retailers. The user can compare prices and delivery times from both big platforms and nearby local stores, then make a purchase from the most convenient provider.
Benefit: Small businesses and local stores can compete with larger e-commerce platforms and reach a wider audience without needing their own app or website.
4. Healthcare Services
Example: A patient needs to book a doctor’s appointment but doesn’t want to manually search through different healthcare platforms. A healthcare app using Beckn shows available doctors and clinics across multiple platforms (like Practo, 1mg, or even independent clinics). The patient can choose a doctor based on location, specialization, and availability, all in one place.
Benefit: Patients get access to a larger pool of healthcare providers, and doctors can offer their services on multiple platforms through a single integration.
5. Logistics and Courier Services
Example: An online seller wants to ship products to customers but doesn’t want to manage multiple courier services. With an app built on Beckn, they can see delivery options from multiple logistics providers (like FedEx, Blue Dart, and local couriers) and choose the best one based on cost, speed, or reliability.
Benefit: Businesses can streamline shipping operations by comparing various logistics providers through one interface, optimizing for cost and delivery time.
6. Public Transportation
Example: A commuter is planning a trip using public transit in a city. Using a Beckn-powered app, they can view transportation options from multiple transit services (like metro, bus, bike-sharing services, or even ride-hailing). The app provides real-time schedules, available options, and payment methods across different transport networks.
Benefit: The commuter has a unified experience with multiple transportation modes, improving convenience and access to more options.
7. Local Services (Home Services, Repair, Cleaning)
Example: A user needs a home repair service (e.g., a plumber or electrician). Instead of browsing different service provider platforms (like UrbanClap or Housejoy), a Beckn-enabled app aggregates professionals from multiple service providers. The user can compare prices, reviews, and availability and book a service directly from the app.
Benefit: Service providers get access to more customers, and consumers can quickly find professionals based on location, reviews, and price.
8. Travel and Hospitality
Example: A traveler uses a travel booking app based on Beckn to find accommodations. The app aggregates listings from various hotel chains, Airbnb, and local guesthouses. The traveler can filter by price, location, and amenities, then book the best option without switching between platforms.
Benefit: Smaller accommodation providers can compete with big brands, and travelers get access to more choices across different platforms in one app.
9. Government Services and Civic Engagement
Example: A citizen uses a Beckn-enabled app to access multiple government services. They can apply for a driver’s license, pay taxes, and book a health checkup at a government hospital—all from one platform that integrates services from different government departments and third-party providers.
Benefit: Governments can offer a unified experience across various services, and citizens get easier access to public services without visiting multiple websites or offices.
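To give a flavour of how these use cases look on the wire, here is a rough sketch of a Beckn-style search request from a buyer app (BAP). The field names loosely follow the Beckn core transaction schema, but treat the exact structure, domain code, URLs, and IDs as assumptions to be checked against the official specification.

```typescript
// Rough sketch of a Beckn "search" call from a buyer app (BAP).
// Domain codes, URLs, and IDs below are illustrative assumptions, not spec-verified values.
const searchRequest = {
  context: {
    domain: "mobility",                 // e.g. ride-hailing; actual codes vary by network
    action: "search",                   // discovery step of the order lifecycle
    version: "1.1.0",
    bap_id: "example-buyer-app",        // hypothetical buyer app ID
    bap_uri: "https://bap.example.com", // hypothetical callback endpoint
    transaction_id: "txn-001",
    message_id: "msg-001",
    timestamp: new Date().toISOString(),
  },
  message: {
    intent: {
      fulfillment: {
        start: { location: { gps: "12.9716,77.5946" } }, // pickup point
        end: { location: { gps: "12.2958,76.6394" } },   // drop-off point
      },
    },
  },
};

// A gateway or seller platform (BPP) would respond asynchronously with "on_search" catalogs.
console.log(JSON.stringify(searchRequest, null, 2));
```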
He adds,
The ONDC (Open Network for Digital Commerce) initiative in India is built using the Beckn protocol, aiming to democratize e-commerce and bring small retailers into the digital economy. The Indian government supports ONDC for making digital commerce more accessible and competitive.
While it is nice to have government support, anyone anywhere can deploy open and decentralized tech, or integrate it into their apps and services.
On Tuesday we’ll have a chance to talk about all this at our latest salon at Indiana University and live on Zoom. Our speaker, Shwetha Rao, will be here in person, which always makes for a good event—even for those zooming in.
So please be there. As a salon, it will be short on lecture and long on dialog, so bring your questions. The Zoom link is here.
Experts discovered the top 10 overused passwords in the US that could put you at risk of being easily hacked.
NordPass and NordStellar recently released their sixth annual analysis of personal password habits.
Based on the data NordPass and NordStellar crunched, ‘secret’ was the most common password in the US.
The management platforms found that the password was used 328,831 times, and it would take less than one second for someone to crack it.
‘Secret’ is also ranked in the top 10 most common passwords in the world.
Speaking with CNBC, Andrew Shikiar, executive director of the FIDO Alliance, noted that hackers could guess a password even if it’s spelled with numbers or other character substitutions.
‘For example, they might believe that “secret” is a weak password but “s3cr3t” will be hard to guess,’ Shikiar said in 2019.
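The "less than one second" figure is easy to sanity-check with back-of-the-envelope arithmetic: a six-letter, lowercase-only password like "secret" lives in a tiny search space. The guess rate below is an assumed round number for offline cracking hardware, not a measured figure, and in practice a dictionary attack finds "secret" (and "s3cr3t") far faster than exhaustive search, which is Shikiar's point about substitutions.

```typescript
// Brute-force search space for a 6-character, lowercase-only password.
const alphabetSize = 26;
const length = 6;
const searchSpace = Math.pow(alphabetSize, length); // 26^6 = 308,915,776 combinations

// Assume (for illustration) an offline cracking rig testing 1 billion guesses per second.
const guessesPerSecond = 1e9;
const worstCaseSeconds = searchSpace / guessesPerSecond;

console.log(searchSpace.toLocaleString()); // "308,915,776"
console.log(worstCaseSeconds);             // ~0.31 seconds, even before dictionary shortcuts
```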
In the business world, data consistency is essential, but so is data privacy and control over who has access to it. Within a permissioned private network, where multiple entities are involved, there may be situations where access to transaction details between specific participants needs to be restricted for various reasons. This highlights the importance of privacy, even among entities within the same private environment. It’s not difficult to envision a scenario where this occurs—for instance, in a setting where transfers of tokens representing financial assets take place between entities.
Corporate Overview
Branch® is a cloud-native home and auto insurance company founded in 2020. Operating on a serverless architecture, Branch’s mission is to simplify the insurance purchasing experience for consumers and independent insurance agents.
Branch Authentication Challenges
“One of our key superpowers is making the insurance buying experience as easy as possible,” explained Arkadiy Goykhberg, Chief Information Security Officer at Branch.
Due to the sensitive nature of their market and the variety of stakeholders they serve, Branch faced multiple authentication challenges:
Legacy two-factor authentication. Branch had been relying on SMS-based two-factor authentication, which has multiple issues: telco outages could prevent users from logging in, and it is not phishing resistant and is exposed to the risk of SIM-swapping attacks.
Customer support volume. There was a high volume of support tickets related to password resets and login issues.
User-friendly approach. Branch needed a more secure and user-friendly authentication process to serve their 12,000+ independent insurance agents.
Compliance. Another core challenge was the need to meet strict compliance requirements in the highly regulated insurance industry.
How Passkeys Addressed Branch’s Challenges
Branch identified passkeys as the solution to their authentication problems for several reasons.
Enhanced Security: Passkeys are inherently phishing-resistant, addressing the vulnerabilities associated with SMS-based authentication.
Improved User Experience: Passkeys eliminate the need for passwords, reducing friction during login and preventing issues related to forgotten passwords or typing errors.
Reduced Support Burden: By implementing passkeys, Branch saw a significant reduction in support tickets. John MaGee, Software Product Manager at Branch, noted, “We did see our support ticket volume drop by about half, which was the key business goal, outside of some of the user experience and security goals of the project.”
Regulatory Compliance: Passkeys provided a strong foundation for meeting current and future regulatory requirements in the insurance industry.
Compatibility with Existing Infrastructure: Passkeys integrated well with Branch’s cloud-native architecture, allowing for a smoother implementation process.
Implementation process and results
Branch adopted a phased approach to implementing passkeys. The first phase involved internal testing: Branch first implemented passkeys for internal use, which helped build confidence and user acceptance. Branch then went through a vendor selection and development phase, contracting with Descope, having decided it was more efficient to engage a service provider to help with the passkey implementation.
The project roadmap included a two-month vendor selection process, followed by a three-month development phase and a six-week end-user migration phase.
The final step was a phased user migration. Branch rolled out passkeys to its agents in waves, starting with a small group and gradually scaling up. The onboarding process involved multiple communication campaigns to prepare users for the new authentication experience. The user journey included prompting users to set up passkeys and providing a fallback option of email and OTP. The goal was to ensure a seamless transition and reduce support ticket volume by eliminating password resets. This approach allowed the company to refine the process based on feedback and minimize risks.
The results of the passkey implementation were impressive:
25% passkey adoption rate across the organization, exceeding internal goals.
50% reduction in support ticket volume related to authentication issues.
Maintained steady login failure rates at 5%, despite the transition.
Improved user experience, with fewer frustrations related to authentication.
One surprising benefit was the high compatibility of passkeys with existing hardware and software. Goykhberg said that he had initially expected that only approximately 60% of systems would support passkeys.
“That hypothesis was wrong. To my surprise, only a few devices across thousands of logins could not support passkeys,” he said.
Branch’s passkey success and future roadmap
Branch’s successful implementation of passkeys has not only addressed their current authentication challenges but also laid the groundwork for future improvements and expansions.
Goykhberg said:
“Descope’s flexible workflow made implementing passkeys and taking care of edge cases relatively straightforward. With conditional steps, we routed users to passkeys when their hardware or software were compatible, and routed them to fallback MFA options when passkeys couldn’t be supported. Visualizing the user journey as a workflow helps us audit and modify the registration and authentication journey without making significant code changes, which sets us up well for the future.”
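The quoted workflow can be approximated in plain WebAuthn terms. The sketch below uses the standard browser capability checks to decide between a passkey flow and an email-OTP fallback; it is not Descope's actual API, and the two routing targets are hypothetical app-specific function names.

```typescript
// Route users to passkeys when the device supports them, otherwise fall back to email OTP.
// The returned value would drive hypothetical startPasskeyLogin() / startEmailOtpLogin() steps.
async function chooseLoginFlow(): Promise<"passkey" | "email-otp"> {
  const supportsWebAuthn = typeof window.PublicKeyCredential !== "undefined";

  if (supportsWebAuthn) {
    const [platformAuth, conditionalUi] = await Promise.all([
      PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable(),
      PublicKeyCredential.isConditionalMediationAvailable?.() ?? Promise.resolve(false),
    ]);

    if (platformAuth || conditionalUi) {
      return "passkey"; // hardware/software is compatible, mirroring Branch's primary path
    }
  }

  return "email-otp"; // fallback MFA path when passkeys can't be supported
}
```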
The company’s successful phased rollout approach, starting with internal adoption and then gradually expanding to their agent base, highlights the importance of incremental implementation and learning. This strategy will continue to inform their future authentication initiatives. Building on the initial success of 25% passkey adoption, Branch aims to increase this number through targeted experimentation and user education.
Branch’s successful implementation of passkeys demonstrates how this modern authentication method can significantly improve both security and user experience in the insurance industry. By addressing the vulnerabilities of traditional authentication methods, reducing support burden, and providing a seamless user experience, passkeys have proven to be a valuable solution for Branch’s authentication needs.
Zürich, Switzerland — November 14, 2024
Trace Labs, the core builders behind the OriginTrail ecosystem, is pleased to announce the expansion of its advisory board with the addition of Toni Piëch and Chris Rynning. Both esteemed leaders bring extensive experience in fostering human-centric technology, investment, and innovation, further positioning Trace Labs to drive trusted advancements in Artificial Intelligence (AI) and sustainable digital solutions across multiple sectors, including healthcare, construction, and mobility.
The OriginTrail ecosystem, built on decentralized knowledge graph technology, is committed to leveraging AI in a responsible and sustainable manner. By joining the advisory board, Toni and Chris will help shape Trace Labs’ vision for harnessing AI to positively impact industries while advocating for ethical, human-centered applications of technology.
Toni Piëch
Toni Piëch, a serial entrepreneur and 4th generation member of the Piëch-Porsche family, brings a unique blend of global experience and vision for developing a trusted technology ecosystem. Currently based in Luzern, Switzerland, Toni pursues contributions to technology and sustainability both through the Anton Piëch Foundation (https://www.tonipiechfoundation.org/) and his broad technology investment activities, investing both in venture capital funds and directly in people and companies. A graduate of Princeton University with a background in East Asian Studies, Toni spent twelve years in China before returning to Europe to further his philanthropic and investment efforts that can make significant contributions to a better and safer world.
Chris Rynning
Chris Rynning, an economist and investment professional, brings decades of expertise in venture capital and global markets. A resident of Zurich, Switzerland, Chris is a seasoned investor with a background in mergers & acquisitions and public/private market investing, and is currently the managing partner of the Piëch-Porsche family office AMYP Ventures. A graduate of ESSEC in Paris, Chris also holds an MBA in Finance and Economics from the University of Chicago. His influence spans Asia, the US, and Europe, where he has lived and served as an investor and advisor to scale-up companies, while maintaining a thought leadership role in AI, cryptocurrencies, and blockchain. Chris also authored a book on the topic in 2018.
Toni and Chris join a prestigious advisory board that includes Dr. Bob Metcalfe, Ethernet inventor, and Turing Award winner; Greg Kidd, founder of Hard Yaka; and Ken Lyon, global logistics expert. Together, this board will support Trace Labs’ mission of pioneering decentralized solutions that power trust and transparency.
For further information, please contact:
lucija.naranda@tracelabs.io
Trace Labs, Core Developers of OriginTrail, Welcomes Toni Piëch and Chris Rynning to the Advisory… was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
You might have noticed that many online services are now offering the option of using passkeys, a digital authentication method touted as an easier and more secure way to log in.
Some 20% of the world’s top 100 websites now accept passkeys, said Andrew Shikiar, CEO of the FIDO Alliance, an industry group that developed the core authentication technology behind passkeys.
Passkeys first came to the public’s attention when Apple added the technology to iOS in 2022. They got more traction after Google started using them in 2023. Now, many other companies including PayPal, Amazon, Microsoft and eBay work with passkeys. There’s a list on the FIDO Alliance website.
Still, some popular sites like Facebook and Netflix haven’t started using them yet.
Passkey technology is still in the “early adoption” phase but “it’s just a matter of time for more and more sites to start offering this,” Shikiar said.
We are pleased to announce that Particle Network has added BTC Connect to the Elastos ecosystem as part of our ongoing collaboration. This development enhances Elastos’ relationship with Bitcoin, offering users a slicker Bitcoin experience.
What This Means
Single Wallet Use: Connect your Bitcoin wallet—like UniSat, OKX, or BitGet—directly to Elastos. This allows you to interact with both the Bitcoin network and Elastos’ smart contracts using the same wallet.
Smart Account Assignment: When you connect, BTC Connect assigns a smart account to your Bitcoin wallet. Your Bitcoin wallet acts as the signer, so you don’t need separate accounts.
Easy Transactions: Send and receive transactions on both Bitcoin and EVM-compatible chains without hassle. Your Bitcoin wallet’s signatures are adapted for use on these chains.
Looking Ahead with BeL2
BTC Connect sets the stage for a range of BeL2 dApp possibilities:
DeFi Access: Soon, you might use your Bitcoin wallet to secure BTC, generate proofs for smart contracts, and obtain USD loans or issue stablecoins all in one wallet. These balances would appear in your BTC Connect account, usable across various chains.
Unified Experience: BeL2 DApps could let you handle complex DeFi tasks directly from your Bitcoin wallet, making it feel like you’re still on the Bitcoin network but with added features from Elastos’ SmartWeb.
Cross-Chain Use: Use stablecoins or USD loans across different EVM chains without needing traditional intermediaries or bridges.
Experience this advancement by checking out the BTC Connect demo: BTC Connect Demo.
Our collaboration with Particle Network brings BTC Connect into Elastos, giving users a unified and secure blockchain environment. We’re excited about the future possibilities this opens up for users and developers alike. Did you enjoy this article? Follow Infinity for the latest updates here!
Progressive Trust—it sounds a bit like something from a relationship advice column, right? But in the world of digital interactions, it’s actually a revolutionary model, one that moves us away from “all-or-nothing” choices into a more human, flexible way of establishing trust. Progressive Trust is about mirroring the natural ways we build trust in real life, adding depth and resilience to our digital interactions.
“The basic idea behind progressive trust is to model how trust works in the real world”
—Christopher Allen, Musings of a Trust Architect: Progressive Trust (December 2022)
In real life, trust doesn’t happen at the click of a button. It’s a process. You don’t start a friendship, a business deal, or a marriage with complete openness or blind trust. Instead, what you reveal is initially minimized, and then trust builds up gradually. As we share experiences, we reveal more, bit by bit, learning through consistent responses from the other person. When it comes to digital relationships, whether they’re between people, devices, or other entities, why should things be any different?
Why Progressive Trust Matters Today
The internet didn’t start off so polarized. Back in the early days, you could slowly get to know people online, like on message boards or MUDs, where interaction was incremental and organic. But as commercialization took over, new online communities popped up with restricted, binary models of trust. Tech giants started telling us who to trust based on certificates or institutional endorsements, pushing people into a “trust or don’t trust” mindset. But this one-size-fits-all approach isn’t just impersonal. It’s risky. Without the gray area, we’re left with blind trust or total skepticism, with few options in between.
Enter Progressive Trust, which seeks to change that by returning choice to the user, letting individuals decide whom to trust and how much of themselves to reveal over time. It’s an effective way to enhance security and protect user agency, fitting seamlessly into decentralized systems like blockchain, where openness and security go hand in hand. Progressive Trust takes the online world back to a more natural process of gradual trust-building, transforming digital trust from a binary affair into something more organic.
The Progressive Trust Life Cycle
Let’s break down the Progressive Trust Life Cycle into its key phases, each step building on the last and adding layers of trust over time. Think of it as a journey from cautious introduction to informed engagement, with each phase providing the groundwork for a stronger, more resilient trust model. These are the steps of progressive trust that are simultaneously automatic in the real world and often ignored in the digital world.
0. Context – Interaction Considered
The foundation of Progressive Trust begins with understanding the Context of an interaction. This sets the stage by establishing the purpose and parameters of the interaction, helping each party assess risk and feasibility. Before any data is exchanged or any commitments are made, each party considers the interaction’s purpose, its goals, its potential benefits, and the risks involved. They also examine the setting in which the interaction takes place, ensuring that they understand the overall environment and any particular conditions that might impact their decision.
Example: A homeowner, Hank, evaluates hiring a contractor for a kitchen remodel. He considers the financial costs, the importance of quality work, and the potential risks of inviting someone into his home for an extended period. The stakes of the scenario are sufficient to prompt Hank to engage in a Progressive Trust model, as opposed to a quick, one-off transaction.
This initial phase helps each side assess whether the potential stakes, such as financial or reputational risk, warrant a full, Progressive Trust approach or if simpler, lower-risk models could suffice.
1. Introduction – Assertions Declared
With the interaction context defined, both parties proceed with Introduction, where they each make initial declarations and claims. By sharing basic information, the parties set the groundwork for further scrutiny, while keeping sensitive details private or hidden (for now).
Example: Hank meets Carla, the cabinet maker, at a social gathering and discusses his interest in remodeling his kitchen. Carla offers her business card and highlights her experience, expressing interest in working with him. This initial interaction is informal yet purposeful, establishing the first connection and introducing each party’s intentions.
This phase is an essential starting point for trust-building, as it allows each party to signal their intentions clearly and publicly, establishing a mutual understanding of what they aim to accomplish. It does not involve extensive trust verification but instead creates a framework of transparency and expectation between the participants.
2. Wholeness – Integrity Assessed
Once an introduction has been established, both parties assess the Wholeness of the information shared. This phase involves evaluating the structural integrity of the data, ensuring that all critical pieces are complete and correctly formatted. Think of this as a quality check: verifying that foundational information is present, well-formed, and free from any immediate signs of corruption or tampering.
Example: Hank checks Carla’s business card, noting that it includes her contractor license number and contact details. Carla, meanwhile, considers whether Hank’s job aligns with her skillset. Both use this phase to make sure the information they have about each other is coherent and free of red flags.
This phase creates the foundation for deeper verification by ensuring that each party’s data contributions are reliable at a surface level. Without verifying structural integrity, any future steps could rest on flawed or incomplete data, leading to potential misunderstandings or risks.
3. Proofs – Secrets Verified
With data integrity confirmed, the next step is Proofs, where parties delve into verifying the sources of the data. It’s a deeper level of validation, establishing the authenticity of the sources for each party’s assertions. That validation leverages modern technology such as digital signatures where possible, to minimize the risk of misrepresentation or fraud.
Example: Hank calls a few of Carla’s previous clients, confirming that they exist, and the testimonials given to him are real. Similarly, Carla may ask for proof of Hank’s readiness to pay by confirming his budget or financial standing.
This phase confirms that both parties’ assertions are backed by a proof, to establish a more secure foundation for the interaction.
4. References – Trust Aggregated
Building on the established proofs, the References phase broadens trust by gathering endorsements, certificates, or additional validation from external sources. This step goes beyond just authenticating the source of any assertions. It’s about gathering the good word from others, including testimonials, reviews, licenses, or certificates. Cryptographic methods may also be used to assure the validity of the references. Parties don’t necessarily gather every reference: they collect until they feel they have enough corroborating information to proceed.
Example: Hank checks Carla’s contractor license in a state registry and reads online reviews. Carla, in turn, verifies Hank’s reputation or credibility within her professional network, gaining confidence that his project is legitimate and that he can be trusted to honor financial commitments.
This phase provides a composite view of the other party’s trustworthiness based on diverse sources, making trust more holistic. It creates a comprehensive picture without oversimplifying the credibility of each party into a binary “yes” or “no.”
5. Requirements – Community Compliance
After personal and third-party validation, the parties consider whether the interaction meets broader Community Standards and Requirements. Here, each party assesses if the interaction complies with external guidelines, legal standards, or industry norms, which may vary by context. Compliance might involve revealing additional data, following guidelines for quality or safety, or meeting regulatory requirements, which helps each party feel confident that their involvement is appropriate and sanctioned.
Example: Hank ensures that Carla’s contractor license and project quote meet legal requirements and industry standards, such as fair pricing and warranty expectations. Carla might consult her network or a local building authority to verify that Hank’s project is feasible and professionally compliant.
This phase adds another layer of credibility through its confirmation that the interaction aligns with expected practices and requirements.
6. Approval – Risk Calculated
With community compliance confirmed, each party calculates the risk of proceeding and provides a tentative Approval. This step involves a personal assessment, comparing the accumulated trust to any potential risks or liabilities. It’s a decision point where each party considers their own risk model and goals, assessing whether the interaction is likely to fulfill their needs without exposing them to undue harm. Approval may involve internal checks or may require formal documentation of agreed-upon terms.
Example: Hank and Carla both review the project’s terms and risk factors, ensuring they feel comfortable with potential liabilities. When ready, they formalize their commitment by signing a contract, each confident that the project aligns with their risk model and is mutually beneficial.
This phase emphasizes that trust isn’t an all-or-nothing concept. It exists on a spectrum, and each party must decide if their level of trust is sufficient to continue.
7. Agreement – Threshold Endorsed (Optional)
In situations of higher stakes or complexity, the Agreement phase may require additional endorsements before proceeding. An Agreement phase is optional but valuable when external input can add layers of confidence, often through the endorsement of peers, family members, or other trusted figures. Threshold endorsements are vital in larger or more sensitive projects, ensuring that all necessary parties or authorities approve before moving forward.
Example: Hank might discuss the project with his family for added assurance, while Carla secures necessary permits from the city. Both parties use these endorsements to reinforce their confidence before moving forward.
This phase provides an extra level of validation, helping each party feel more secure in their decision to proceed.
8. Fulfillment – Interaction Finalized
Fulfillment is the phase where each party finally executes their commitments, bringing the project to life based on the trust established through previous steps. Fulfillment requires each party to act according to the rules they’ve set, adhering to any terms, standards, or expectations agreed upon earlier.
Example: Carla completes the kitchen remodel, delivering quality work as per the contract. Hank, in turn, fulfills his financial commitment by making the payment. The project reaches its conclusion, satisfying both parties’ expectations based on their prior trust-building efforts.
This phase represents the culmination of the trust-building process, where both sides honor their agreements and responsibilities. It’s a phase of action rather than evaluation, marking a key transition from planning to execution, after which the interaction is officially complete.
9. Escalation – Independently Inspected (Optional)
In high-stakes or sensitive interactions, the Escalation phase optionally introduces an independent, third-party inspection. This step allows an impartial reviewer to verify that each party’s work or commitments were met, ensuring that the final product aligns with the agreed-upon standards. An inspector may re-evaluate certain phases, especially compliance and fulfillment, confirming that all requirements were followed.
Example: A city inspector reviews Carla’s remodel to ensure it complies with local building codes, giving Hank and Carla final confirmation that the project meets regulatory standards.
This phase helps protect each party, providing an additional level of assurance when risk is high or when the interaction has lasting implications.
10. Dispute – Independently Arbitrated (Optional)
If issues arise, the final, optional phase of Dispute involves resolving conflicts through independent arbitration. In cases where fulfillment does not meet expectations, each party may bring forth additional data or reveal previously concealed information to support their case. An arbitrator then assesses the evidence, reviewing both parties’ original commitments, agreements, and standards, to determine a fair resolution.
Example: If a cabinet installed by Carla collapses, Hank may initiate a dispute to assess liability. An independent arbitrator reviews the contract, Carla’s compliance with installation standards, and any relevant inspection reports, ultimately deciding if Carla is responsible for repairs or damages.
This phase safeguards both parties, providing a structured way to resolve disagreements that may impact future interactions or reputations.
The Progressive Trust Life Cycle (diagram)
Interactions are actually mirrored by both parties, but the diagram simplifies things in most places by focusing on party two.
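For readers who want to model this life cycle in software, here is a minimal sketch of the phases as a simple Python state machine. It assumes nothing beyond the phase names and the optional/required distinction described above; the class and function names are illustrative, not part of any existing library.

```python
from enum import Enum
from typing import Optional


class Phase(Enum):
    """Phases of the Progressive Trust Life Cycle described above."""
    CONTEXT = 0        # Interaction Considered
    INTRODUCTION = 1   # Assertions Declared
    WHOLENESS = 2      # Integrity Assessed
    PROOFS = 3         # Secrets Verified
    REFERENCES = 4     # Trust Aggregated
    REQUIREMENTS = 5   # Community Compliance
    APPROVAL = 6       # Risk Calculated
    AGREEMENT = 7      # Threshold Endorsed (optional)
    FULFILLMENT = 8    # Interaction Finalized
    ESCALATION = 9     # Independently Inspected (optional)
    DISPUTE = 10       # Independently Arbitrated (optional)


OPTIONAL_PHASES = {Phase.AGREEMENT, Phase.ESCALATION, Phase.DISPUTE}


def next_phase(current: Phase, include_optional: bool = False) -> Optional[Phase]:
    """Return the phase that follows `current`, skipping optional phases unless asked."""
    for candidate in Phase:
        if candidate.value <= current.value:
            continue
        if not include_optional and candidate in OPTIONAL_PHASES:
            continue
        return candidate
    return None  # life cycle complete


# A low-stakes interaction that skips the optional phases:
phase: Optional[Phase] = Phase.CONTEXT
while phase is not None:
    print(phase.value, phase.name)
    phase = next_phase(phase)
```

Higher-stakes interactions would simply walk the same sequence with `include_optional=True`, mirroring how the article treats Agreement, Escalation, and Dispute as extra assurance rather than mandatory steps.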
Beyond Binary Trust: How Progressive Trust Can Transform the Internet
Progressive Trust offers a way to return agency to individuals in a world increasingly dominated by centralized systems. Instead of clicking “OK” on trust agreements handed down by big corporations, users regain control over who they trust and to what degree, over time. Imagine a digital ecosystem where browsers, websites, or social media platforms gradually allowed users to choose what information they revealed and what they kept private, based on their own evolving trust models.
From Gradients to Greatness
The vision for Progressive Trust goes beyond making interactions safer; it’s about bringing digital trust closer to real-world norms. With Progressive Trust, we’re not just building secure systems; we’re creating environments where people can interact meaningfully and sustainably, with digital relationships that grow stronger over time, just like in real life. Whether it’s in journalism, finance, wellness, or personal data sharing, the possibilities are endless when trust is no longer binary.
Progressive Trust is hard, but it’s worth it. It’s a mature model, one that can elevate our digital interactions by letting trust grow naturally. We’ve evolved this process over thousands of years in the physical world; now it’s time to bring the same wisdom to the online world. By embracing Progressive Trust, we’re not just keeping data safe; we’re building a digital space where people can authentically connect and collaborate, one step at a time.
For a more extensive discussion of this Life Cycle, including a look at the vocabulary and several more examples in different domains, see “The Progressive Trust Life Cycle” on the Developer web pages. For more on progressive trust, see my 2004 introduction of the concept and my more recent 2022 musings on the topic.
Mastercard is enabling faster and more convenient online transactions with its newest feature, Mastercard Click to Pay, launching in the Asia-Pacific region.
The result is that consumers will be able to enjoy one-click checkout across devices, browsers and operating systems, without needing to input one-time passwords (OTPs).
The feature is enabled by the Mastercard Payment Passkey Service, which allows on-device biometric authentication through facial scans or fingerprints, the same way phones are unlocked.
The Cybersecurity and Infrastructure Security Agency’s (CISA) secure-by-design pledge has hit its six-month mark, and companies that took the pledge say they’ve made significant security improvements since they signed onto the initiative.
There appears to be growing momentum behind the use of passkeys as an alternative identity verification tool to passwords, with familiarity with the technology growing over the past two years while the use of passwords has declined a bit, according to the Fast IDentity Online (FIDO) Alliance.
In its latest Online Authentication Barometer, FIDO found that support for a number of authentication options – including not just passkeys but also biometrics – is growing.
Public awareness of passkeys has jumped from 39% in 2022, when the technology was first introduced, to 57% this year. Meanwhile, the use of passwords in various services sectors is dropping. For example, the percentage of people who used a password over a two-month period for financial services dropped from 51% two years ago to 31% this year.
Retail lags in authentication modernization, but not because providers aren’t interested in upgrading. It’s because customers actively reject change. Familiarity, ease of implementation and legacy system compatibility all mean that very few retailers offer anything beyond usernames and passwords, not even two-factor (2FA) and multi-factor authentication (MFA).
Ecommerce sites have experimented with magic links, an authentication method that is a little higher friction but is still a viable passwordless alternative. Meanwhile, biometric authentication (think fingerprints and facial recognition) is gaining popularity among less technical users, even if it’s simply to unlock their smartphones. Passkeys, another passwordless authentication method, leverage biometrics or a PIN to let consumers confirm a purchase with just a tap or a quick selfie.
Corporate Overview
JCOM Co., Ltd. (J:COM) provides a wide range of services to 5.72 million households nationwide, including cable TV (specialty channels, BS, terrestrial digital), high-speed internet connection, smartphones, fixed-line phones, electricity, video entertainment, and home IoT.
Under the brand message “Making the new normal,” J:COM actively incorporates digital technology to offer new services that make customers’ lives more comfortable and enriched.
To ensure the safe and comfortable use of the various services provided by J:COM, customers need to register a J:COM Personal ID (phone number or email address), which is linked to multiple services and apps offered by the company. Since August 2019, J:COM has been considering a new J:COM Personal ID, aiming to keep pace with the latest security measures while also pursuing the convenience of easy ID registration and login, goals that are often in tension.
Deployment of FIDO2
Previously, in addition to ID/password authentication, J:COM adopted multi-factor authentication by sending one-time passwords to phone numbers.
However, aiming for further convenience, J:COM decided to introduce passwordless authentication using biometric authentication available on customers’ everyday devices (smartphones, tablets).
For the implementation, J:COM used the FIDO-compliant authentication platform “Uni-ID Libra” provided by NRI Secure Technologies, Ltd. (NRI Secure).
Initially, there were challenges in guiding users through the initial setup of FIDO authentication due to differences in operation depending on the OS and browser specifications used by the users, such as fingerprint and facial recognition. However, these issues were resolved by improving screen displays and support site descriptions.
Effects of Implementation
As of August 29, 2024, the number of passkey (FIDO credentials) registrations has reached 16% of the total IDs, and the number of services that can use biometric authentication has reached 25. This implementation has not only improved convenience but also resulted in cost savings on SMS transmission fees, as the cost remained flat despite the increase in the number of users and authentications for the services provided by J:COM.
Passkey (FIDO credentials) registrations have reached 16% of the total IDs.
Shiori Takagi from the Agile Development Department, IT Planning Promotion Division, Information Systems Department of JCOM Co., Ltd., commented on this case study:
“With the introduction of FIDO authentication, we believe we have made significant progress towards our goal of enabling customers to log in and use services more securely and easily. We believe that registration will expand further and service usage will be promoted in the future.”
Read the Case Study
The inaugural Hiero community meeting marked a significant milestone for the Hiero project, emphasizing its global reach and the adaptive nature of open-source initiatives. Diane Mueller, Head of Open Source Development at Hedera, opened the meeting by providing context on Hiero’s journey and its recent contribution to the Linux Foundation’s LF Decentralized Trust. Hendrik Ebbers, Chair of the Hiero Technical Steering Committee (TSC), was originally scheduled to present but had to bow out due to illness. Stepping up on short notice was Richard Bair, Hashgraph’s Director of Engineering and another member of the TSC, showcasing the community’s ability to adjust seamlessly.
Update: added videos after they were made available via the ePIC 2024 conference proceedings page
Image: CC BY-NC Visual Thinkery for WAO
Last week, I was in Paris for ePIC, a conference I keynoted 12 years ago(!) in my first week on the Mozilla Foundation’s Open Badges team.
Four years later, in 2016, participants at the event signed the Bologna Open Recognition Declaration (BORD). This year, we signed the Paris Declaration on the Equality of Recognition.
Many thanks to the organisers of the event, who manage to ensure that it runs smoothly every year and invite a wide range of people to participate within the ‘big tent’ that is Open Recognition!
Open Recognition vs Microcredentials
Image: CC BY-NC Visual Thinkery
There has been a lot of muddy thinking in the badges and credentials world since Mozilla handed over stewardship of the Open Badges standard. One of the major issues, which we discussed in an impromptu ‘group therapy session’ at ePIC, is the paucity of microcredentials as a term of art.
Microcredentialing, as we shall discuss in an upcoming NDLN Horizon Report, is a supply-side reinvention of Open Badges. Unfortunately, it has at its core neither an agreed-upon definition, nor a technical standard. As such, ‘microcredential’ is what I would call an unproductively ambiguous term; it doesn’t mean or signify much.
The reason that Open Recognition is an increasingly attractive approach to badging is that it is holistic. It builds upon the original, revolutionary Mozilla Open Badges white paper by putting the individual at the centre and decentralising recognition practices. Not all Open Recognition needs to be badged, credentialed, or endorsed, but the important thing is that it can be — and by anyone.
WAO x ePIC
Last year, in Vienna, WAO was represented by Laura, Anne, and me. This year, I went by myself, despite the slide below with Laura’s name on it!
I ran a 45-minute workshop on using AI tools for identifying and mapping skills against various frameworks using a custom GPT that I called ePIC Skills Mapper. I also showed the functionality in one particular badge platform on how AI can help generate badge metadata.
The slides from this workshop are available.
Julie Keane from Participate kindly helped me present in a 20-minute slot for sharing our findings from a recent project in which we were both involved.
The slides from this presentation are also available.
Open Recognition is for Everybody (ORE)
Image: CC BY-NC Visual Thinkery
WAO would like to reinvigorate the ORE community in 2025. While we’re currently still meeting on the last Tuesday of every month, we can do more in terms of spreading Open Recognition practices within the various communities and networks in which we operate.
Why not join us on November 26th to reflect on ePIC and plan for next year? Click here to register — all welcome! https://lu.ma/2wq9hpuc
A Little Open Recognition Goes A Long Way was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
We’ve been heads down working hard on solutions for builders at the intersection of data, AI and crypto. But we’re coming up for air to meet other builders in Bangkok this month. Whether you’re a builder, an operator, or just learning about the space, we’d love to meet you.
Where to find our crew:
Filecoin’s FIL Dev Summit – Nov 11
Devcon – Nov 12-15
Fluence’s DePIN Day – Nov 15 (RSVP to attend, and catch us at our booth)
Join Proof of Data to be part of an ongoing community
Join our private Telegram group, Proof of Data, for people working on challenges related to the Web3 data ecosystem. This is a collaborative, ongoing space where you can connect with others who are interested in decentralized storage, verifiable data, data availability, identity and reputation, synthetic data, DePIN, and more.
To get the invite link, chat with one of our team members at DePIN day, or DM us on X.
Workshop
January 8, 2024
1:00 – 4:00 PM ET
Princeton University Campus • Jadwin Hall Room A7
Presented by:
Rachana Ananthakrishnan
Executive Director, Globus, University of Chicago
Jason Zurawski
Science Engagement Engineer, Energy Sciences Network (ESnet), Scientific Networking Division, Lawrence Berkeley National Laboratory
Romy Bolton
Director of Project Management, Internet2
Jean Chorazyczewski
Director – InCommon Academy, Internet2
This informative workshop offers a comprehensive overview of Science DMZ, Globus, and the InCommon Federation, focusing on how these technologies can support secure and efficient data-driven research collaborations. Participants will gain insights into how Science DMZ environments optimize network configurations for data-intensive research, explore Globus as a powerful tool for reliable data management and automation, and understand the role of InCommon Federation in simplifying identity management and resource sharing across institutions. This session provides foundational knowledge to help attendees enhance data accessibility, security, and collaborative capabilities within their research infrastructures.
Join us for this session designed to empower researchers, research computing and information technology (IT) professionals with essential insights for creating secure, high-performance data transfer frameworks that enhance collaborative research capabilities.
There is no fee to attend this workshop. Light refreshments will be served for attendees.
Register Now »
Please contact Forough Ghahramani (research@njedge.net) for additional information.
We are grateful for support from the National Science Foundation.
CC* Regional Networking: Connectivity through Regional Infrastructure for Scientific Partnerships, Innovation, and Education (CRISPIE) project (NSF OAC- NSF0311528)
The post Empowering Secure Data-Driven Research: A Workshop on Science DMZ, Globus, and InCommon Federation appeared first on NJEdge Inc.
Public Review ends -- November 25th
OASIS and the TOSCA TC are pleased to announce that TOSCA v2.0 CSD07 is now available for public review and comment.
TOSCA provides a language for describing application components and their relationships by means of a service topology, and for specifying the lifecycle management procedures for creation or modification of services using orchestration processes. The combination of topology and orchestration enables not only the automation of deployment but also the automation of the complete service lifecycle management.
The documents and all related files are available here:
TOSCA v2.0
Committee Specification Draft 07
09 October 2024
Editable source:
https://docs.oasis-open.org/tosca/TOSCA/v2.0/csd07/TOSCA-v2.0-csd07.md
HTML:
https://docs.oasis-open.org/tosca/TOSCA/v2.0/csd07/TOSCA-v2.0-csd07.html
PDF:
https://docs.oasis-open.org/tosca/TOSCA/v2.0/csd07/TOSCA-v2.0-csd07.pdf
For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at:
https://docs.oasis-open.org/tosca/TOSCA/v2.0/csd07/TOSCA-v2.0-csd07.zip
How to Provide Feedback
OASIS and the TOSCA TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.
The public review starts 8 November 2024 at 00:00 UTC and ends 25 November 2024 at 23:59 UTC.
Comments may be submitted to the project by any person through the use of the project’s Comment Facility. Members of the TC should submit feedback directly to the TC’s members-only mailing list. All others should follow the instructions listed here.
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries obligations at least equivalent to those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1], applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.
OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.
Additional information about the specification and the TOSCA TC can be found at the public home page here.
Additional references:
[1] https://www.oasis-open.org/policies-guidelines/ipr/
[2] https://www.oasis-open.org/committees/tosca/ipr.php, https://www.oasis-open.org/policies-guidelines/ipr#RF-on-Limited-Mode
Intellectual Property Rights (IPR) Policy
The post Invitation to comment on TOSCA v2.0 appeared first on OASIS Open.
Authors: OriginTrail Core Developers
Date: November 8th 2024
Since the inception of AI in the 1960s, two main approaches have emerged: neural network-based AI and symbolic AI. Neural networks are statistical systems that generate outputs by detecting patterns in training data, while symbolic AI employs deterministic models with explicit knowledge representations and logical connections. Today, transformers within the Large Language Model (LLM) group dominate neural networks, while knowledge graphs are the leading technology in symbolic AI for representing structured knowledge.
Used alone, each approach has limitations. Neural networks are probabilistic and can produce unwanted outputs (hallucinations), risk intellectual property issues, exhibit biases, and face model collapse with a growing amount of AI-generated (training) data online. Symbolic AI, meanwhile, is constrained by its rule-based reasoning, limiting creativity and user experience. Hybrid neuro-symbolic systems combine the strengths of both, leveraging neural networks’ usability and creativity while grounding them in knowledge graphs. This approach can enhance reliability, mitigate biases, ensure information provenance, and promote data ownership over IP risks.
OriginTrail Decentralized Knowledge Graph (DKG), together with NeuroWeb (the AI-tailored blockchain), is surfacing as one of the key components of the symbolic AI branch, enhancing knowledge graph capabilities with the trust of blockchain technology and powering Collective Neuro-Symbolic AI.
This RFC addresses the following key development milestones to further enhance the Collective Neuro-Symbolic AI and will serve as a basis for one of the most extensive roadmap updates to date:
DKG V8 Testnet results and learnings
DKG Core and Edge Nodes Economics
Collective Programmatic Treasury (CPT)
DKG V8 Mainnet launch in December
After reading the following OT-RFC-21, you may leave your comment here: https://github.com/OriginTrail/OT-RFC-repository/issues/47
“Show me the incentives, and I’ll show you the outcome.”
The quote by Charlie Munger speaks to the importance of setting the right incentives in any system. As the DKG network matures in scalability and adoption, the incentives can become more refined in their implementations and more aligned with supporting the key metric: growth of usage of the DKG network.
There are multiple roles in the OriginTrail ecosystem that are incentivized with both TRAC and NEURO. TRAC is incentivizing Core node operators and TRAC delegators while NEURO incentivizes Neuroweb blockchain (Collator) node operators, NEURO delegators, and knowledge publishers (henceforth best represented by DKG EDGE node operators) for incentivized paranets.
The establishment of Collective Programmatic Treasury (detailed in a dedicated section below) will give the most active DKG paranets, by volume of new knowledge assets published to the DKG, an opportunity to take part in building the future of the technology.
The incentives updates and novelties will be released as a part of the DKG V8 mainnet release.
DKG V8 testnet results and learnings
In the first 5 weeks since the DKG V8 Testnet launch, the community has deployed over 500 V8 core nodes, which, as part of the incentive program, submitted over 3.7 terabytes (13.7 billion lines) of core node operational logs and published over 8 million Knowledge Assets. These have proven very valuable inputs for the core developers, who have introduced several optimizations to the DKG based on the submitted telemetry, including performance boosts on the new paranet syncing features, testing curated paranets, and other performance updates.
Chart: log lines submitted by V8 Core Nodes telemetry
The number of nodes on the V8 testnet highlights another key insight: even with a fixed reward budget of 100k TRAC, which was allocated to test the behavior of V8 Testnet Core Nodes, achieving an economically viable node count requires the full implementation of the DKG delegated staking feature. Delegated TRAC acts as a market mechanism to balance the node count according to the rewards available in the network at any time. This underscores the critical role TRAC delegators will play in maintaining stability and economic balance within the V8 DKG ecosystem.
As the initial phase of the V8 testnet wraps up, advancing V8 features and validating them requires an environment where all economic incentives are active to support the full deployment of the DKG V8. Key V8 components, such as the Edge Node and Core Node, will now continue to be deployed and optimized on the V6 mainnet, with the V8.0 mainnet launch set for December this year. This launch will initiate the Tuning Period, during which V8 will gain enhanced performance with features like Batch Minting, Random Sampling, and a new staking interface, all backed by real economic incentives.
In addition, synergistic effects between publishers (represented by DKG Edge Nodes, once the V8 network is deployed) and Core DKG Nodes will be fostered through horizontal scaling. This approach aims to refine network signaling, enabling an optimal network size by aligning the number of nodes more precisely with network demands.
The details in the following chapters of this RFC create a level playing field to prepare for updates on existing incentives on the DKG Core node and access to Collective Programmatic Treasury (CPT).
DKG Core and Edge Nodes Economics
The DKG V8 has been designed with major scalability improvements at multiple levels, with a prototyped implementation tested in collaboration with OriginTrail ecosystem partners from data-intensive sectors.
The major advancement that DKG V8 is making is in expanding the OriginTrail ecosystem’s product suite to two key products:
DKG Core Node V8 – highly scalable network nodes forming the network core, persisting the public replicated DKG
DKG Edge Node V8 – user-friendly node applications tailored to edge devices (phones, laptops, cloud, etc.)**
**The expansion to more devices is intended to be based on ecosystem builders’ capacity and market needs.
Internet scale with DKG Edge nodes
Edge nodes enable the DKG to reach every part of the internet we know today: any device, any user, any chain. Being a lightweight version of the DKG node, Edge nodes can support both accessing the private and public knowledge on the DKG as well as publishing new knowledge.
Having this capability, DKG Edge node is a very useful tool:
for paranet operators, to enable knowledge miners to publish new knowledge onto their paranets;
for solution builders, as a flexible interface for their neuro-symbolic AI products that can access both private and public parts of the DKG;
for DKG Edge node operators that want to start publishing to the DKG, so they could transform their DKG Edge node into a DKG Core node.
The continuation of V8 development focuses on teams looking to deploy their paranets & Edge nodes on DKG Mainnet to generate substantial usage. Therefore, the DKG Edge Node Inception Program budget of 750k TRAC is dedicated to builders launching paranets on both the V6 and V8 mainnet, with up to 100k TRAC per builder available as reimbursement for TRAC used for publishing to a particular paranet.
More details on how you can apply for the DKG Edge Node Inception Program can be found here.
Horizontal scaling with DKG Core nodes
The backbone of the DKG network in V8 is formed of DKG Core nodes, whose purpose is to ensure secure hosting of the public DKG and facilitate network communication in a decentralized fashion. DKG Core nodes are incentivized through competing for DKG publishing fees in TRAC tokens, which are distributed among the best performing nodes in the network.
The success of a Core node in capturing fees in DKG V6 is currently a function of 3 factors: (1) node uptime and availability, (2) total TRAC stake delegated to a node, and (3) network hash distance (enabling efficient knowledge content addressing).
Several learnings have been acquired in V6 through the period of the system running in production, most notably on how to improve scalability and further fine-tune the incentive system for DKG growth, by updating the relevant parameters in the tokenomics formula.
Particularly, the community of node operators has been indicating the hash distance factor as the most problematic one, causing randomization and impacting the system in an unpredictable and asymmetric way (the nodes with the same amount of stake and uptime could perform differently in terms of rewards due to a different hash ring network position).
On the other hand, the builders’ feedback is that the friction to contributing to the DKG needs to be significantly lower, specifically in terms of publishing price per knowledge asset (addressed with scalability) and accessibility to publishing through available nodes, expressing the need for an approach similar to blockchain RPC services, which allow sending transactions to the blockchain without running a blockchain node.
Therefore V8 introduces an updated Core node incentive system with the following factors:
Node uptime & availability, in positive correlation, as nodes need to prove their commitment to hosting the DKG by submitting proofs to the blockchain (through the new V8 random sampling proof system);
TRAC stake security factor, in positive correlation: the more stake a node attracts, the higher the security guarantees and therefore the higher chance of rewards (same as in V6);
Publishing factor, in positive correlation: the more new knowledge has been published via a specific core node (measured in TRAC tokens), the higher the chance of rewards;
Node fee (formerly “ask”), in negative correlation: nodes with lower fees positively impact the system's scalability and therefore have a higher chance of rewards.
The illustrative incentive formula is therefore:
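(The formula itself appears as an image in the original RFC and is not reproduced here; the display below is only an illustrative reconstruction of the four relationships listed above, where each f stands for a function still to be tuned.)

$$
\text{rewardChance}_i \;\propto\; f_{\text{uptime}}(\text{proofs}_i)\cdot f_{\text{stake}}(\text{stake}_i)\cdot f_{\text{pub}}(\text{publish}_i)\cdot \frac{1}{f_{\text{fee}}(\text{fee}_i)}
$$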
where the specific functions are to be validated on both the testnet (for technical functionality) and mainnet (for market functionality) during the V8 Tuning period.
This addition creates further alignment of Core nodes with the ecosystem growth as Core nodes that take up roles of driving adoption will become more successful. Importantly, it also creates an aligned horizontal scaling approach, since additional Core nodes in the DKG become required with growing adoption. This creates a positive self-reinforcing feedback loop: new adoption leads to new nodes, which leads to increased scale, which unlocks further adoption. We can imagine core nodes almost acting as a “solar panel” that allows publishers to capture TRAC fees from the network so they could use it for their publishing needs.
Network security via staking
TRAC delegators are using their TRAC to secure the DKG network by delegating it to selected Core nodes. In exchange for a delegation (and increasing the core node’s chance of capturing rewards), the node operator splits a part of the captured rewards with the delegators. When selecting the core node to support, the delegators take all the key elements of a successful Core node into account, which will, from DKG V8 onwards, include the amount of knowledge added to the DKG.
NEW: Those who use it, will build it: 60MM TRAC Collective Programmatic Treasury (CPT)
To ensure that those who use the network also have incentives to build its future, the future development fund will be deployed as a 60MM TRAC Collective Programmatic Treasury (CPT). The Collective Programmatic Treasury will be implemented with a programmatic release schedule emitting TRAC to eligible builders. The release schedule will follow the most famous example of emissions in the cryptocurrency space, that of the Bitcoin halving, with minor alterations. The TRAC released from the Collective Programmatic Treasury will be dedicated to those who meet both of the following conditions:
use TRAC tokens for publishing knowledge (paranets spending the most TRAC for publishing knowledge), AND
have been confirmed eligible for incentives by the community (paranets who have completed successful IPOs and are deployed on NeuroWeb).
The schedule
As mentioned above, the schedule draws inspiration from likely the most influential emission schedule in crypto, the Bitcoin halvings. The halving principle dictates that half of the outstanding amount is to be distributed in each following period, in equal amounts throughout that period. While BTC halvings are set at 4 years, our schedule proposal is to set this period at 2 years in the case of TRAC. That said, the emissions schedule would be as follows:
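(The schedule table in the original post is presented as an image. As a rough sketch of the halving arithmetic described above, assuming the full 60MM CPT, two-year periods, and none of the RFC's "minor alterations", the emissions would look roughly like this:)

```python
def cpt_emission_schedule(total_trac: float = 60_000_000, periods: int = 5) -> list:
    """Illustrative halving schedule: each two-year period emits half of what remains."""
    schedule = []
    remaining = total_trac
    for period in range(1, periods + 1):
        emitted = remaining / 2
        remaining -= emitted
        schedule.append((period, emitted))
    return schedule


for period, emitted in cpt_emission_schedule():
    start_year = 2 * (period - 1) + 1
    print(f"Years {start_year}-{start_year + 1}: {emitted:,.0f} TRAC emitted")
# Years 1-2: 30,000,000 TRAC; Years 3-4: 15,000,000 TRAC; Years 5-6: 7,500,000 TRAC; ...
```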
The Collective Programmatic Treasury will be deployed on the NeuroWebAI blockchain and will allow paranet operators to trigger Collect reward transactions which will calculate the amount of rewards they are eligible for and pay it out accordingly.
The distribution
The distribution amounts will be tied to the core principle of “Those who use it, will build it”. The metric which will, therefore, define the amount of TRAC that a builder (represented by their paranet) will receive, is tied to their TRAC spending for creating knowledge on the DKG. A simple example would be as follows:
Paranet A spent 1,000 TRAC
Paranet B spent 2,000 TRAC
Paranet C spent 3,000 TRAC
Collective Programmatic Treasury amount for the period: 600 TRAC
Paranet A: 100 TRAC reward
Paranet B: 200 TRAC reward
Paranet C: 300 TRAC reward
*All numbers are placeholders, just exemplifying the relationship between the spent and received amounts.
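A minimal sketch of the pro-rata calculation implied by this example (the paranet names and amounts are the placeholders above, not real data, and the helper function is purely illustrative):

```python
def distribute_cpt(trac_spent: dict, treasury_amount: float) -> dict:
    """Split the period's CPT emission pro rata to TRAC spent on publishing."""
    total_spent = sum(trac_spent.values())
    return {paranet: treasury_amount * spent / total_spent
            for paranet, spent in trac_spent.items()}


rewards = distribute_cpt(
    {"Paranet A": 1_000, "Paranet B": 2_000, "Paranet C": 3_000},
    treasury_amount=600,
)
print(rewards)  # {'Paranet A': 100.0, 'Paranet B': 200.0, 'Paranet C': 300.0}
```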
The Collective Programmatic Treasury will be observing DKG network usage on the innovation hub of OriginTrail ecosystem, the NeuroWebAI blockchain, thus applying only to NeuroWebAI hosted paranets.
The eligibility & humans in the loop
Not every paranet on NeuroWebAI is by default eligible for the TRAC dev fund emissions. In order to achieve that status, a paranet must have been voted in via the IPO process, gaining support from the NeuroWebAI community through a NEURO on-chain governance vote. In this way, the community collectively decides on the dev fund & NEURO incentive emissions, transparently implementing the “humans in the loop” system via on-chain governance.
The Collective Programmatic Treasury (CPT) is expected to be implemented in March 2025.
DKG V8 release timeline
November
DKG V8 testnet layer 1 completed
OT-RFC-21 release
DKG V8 Edge node Inception program start
December
DKG V8.0 Mainnet and Tuning period launch
Neuroweb collator staking
January 2025
DKG V8.1, Tuning period ends
February 2025
Neuroweb TRAC Bridge made available
March 2025
DKG V8.2 release – Collective Programmatic Treasury
OT-RFC-21 Collective Neuro-Symbolic AI was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
Digital Identity New Zealand (DINZ) is pleased to see the Government’s latest steps toward establishing safe and secure digital identity services, with the official publication of the Digital Identity Services Trust Framework (DISTF) Rules, effective today. This marks a significant milestone for digital identity in New Zealand, where DINZ has contributed its experienced advice from the outset.
The Complexity of Digital Identity
Building a robust yet adoptable digital identity framework is complex and demands ongoing collaboration and input. DINZ has been a part of this journey from the beginning, providing our expertise to help shape a framework that can work for New Zealand’s diverse communities and business needs.
Support for Our Members
As we pivot our DISTF Working Group’s focus from policy submissions to supporting members in adoption, DINZ is committed to helping organisations navigate their accreditation journey. For those engaging with the Trust Framework Authority, DINZ offers a range of resources to support their adoption and accreditation, including DISTF education and awareness sessions developed through our sustained and constructive engagement with the Department of Internal Affairs (DIA). Additionally, through our partnership with InformDI (sponsored by DINZ member NEC), its online educational resource has been made available at no cost to DINZ members through March.
Open for Business – Together
The publication of the Trust Framework Rules enables the Trust Framework Authority to be “open for business,” and DINZ stands ready to support members as they work towards providing secure and trusted digital identity solutions. Together, we are building a safe, secure digital identity ecosystem that supports both privacy and innovation in New Zealand.
Learn More
To find out more about our mahi, our DISTF Working Group, sample our DISTF awareness sessions, or member access to our educational resources, visit our website.
The post DINZ Welcomes the Publication of the Digital Identity Services Trust Framework Rules appeared first on Digital Identity New Zealand.
Younger generations see passwords as outdated and are opting for passkeys, a FIDO-backed technology offering more secure, passwordless authentication. With increasing support from popular apps and services, young users are helping to drive this transition towards safer, FIDO-endorsed security solutions.
“Consumer expectations are changing, and this data should serve as a clear call to action for brands and organizations still relying on outdated password systems,” noted Andrew Shikiar, CEO at FIDO Alliance.
“Consumers are actively seeking out and prefer passwordless alternatives when available, and brands that fail to adapt are losing patience, money, and loyalty – especially among younger generations.”
“When consumers know about passkeys, they use them. Excitingly, 20% of the world’s top 100 websites and services already support passkeys. As the industry accelerates its efforts toward education and making deployment as simple as possible, we urge more brands to work with us to make passkeys available for consumers. The pace of passkey deployment and usage is set to accelerate even more in the next twelve months, and we are eager to help brands and consumers alike make the shift,” Shikiar concluded.
The FIDO Alliance’s fourth annual Online Authentication Barometer reveals significant growth in awareness and adoption of passkeys, with 57% of surveyed consumers now familiar with the technology (up from 39% in 2022). As awareness increases, FIDO is urging more brands to adopt passkey support to help combat the rising sophistication of online threats and scams.
The post Roundtable on Verifiable Credentials: Trust and Truth in an AI-enabled Talent Acquisition Market appeared first on Velocity.
With disruptions becoming more frequent, companies must adapt or risk falling behind.
To stay ahead, many are embracing new technology, with AI emerging as a powerful tool for enhancing supply chain agility and resilience.
In this episode, Steve Hochman, VP of Research at Zero100, joins hosts Reid Jackson and Liz Sertl to talk about the key trends shaping the future of supply chains. He highlights the need for organizations to adapt by improving cross-functional collaboration and leveraging artificial intelligence.
In today’s rapidly changing global environment, organizations must focus on their people, processes, and technology to build lasting supply chain resilience.
In this episode, you’ll learn:
Effective ways to leverage AI for automating supply chain operations
The importance of cross-collaboration for a more integrated and responsive system
How to implement small-scale AI experiments for meaningful impact
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(03:10) The rise of supply chain volatility
(08:12) Cross-functional collaboration in supply chains
(15:35) Innovation through AI experiments
(17:48) Case study: Shein’s use of AI for e-commerce
(21:07) The importance of data management
(25:40) Considering the ethical implications of AI
(31:16) Future trends of AI in supply chains
(32:39) Steve Hochman’s favorite tech
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Steve Hochman on LinkedIn
In the world of digital security, protecting sensitive data with robust encryption is essential. AWS Key Management Service (KMS) plays a crucial role in this space. It serves as a highly secure, fully managed service for creating and controlling cryptographic keys. What many may not realize is that AWS KMS is itself backed by Hardware Security Modules (HSMs), offering the same level of security you'd expect from dedicated hardware solutions.
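As a minimal sketch of what this looks like in practice, the snippet below encrypts and decrypts a small payload with a KMS-held key via boto3. It assumes AWS credentials are configured and that a key with the alias "alias/app-data-key" exists; the alias and region are illustrative, not defaults of the service.

```python
import boto3

kms = boto3.client("kms", region_name="eu-west-1")

# Encrypt a small payload directly with a KMS key; the key material itself
# never leaves the HSM-backed service, only the ciphertext comes back.
ciphertext = kms.encrypt(
    KeyId="alias/app-data-key",            # illustrative alias, not a real key
    Plaintext=b"sensitive configuration value",
)["CiphertextBlob"]

# Decrypt later; for symmetric keys KMS resolves the key from metadata
# embedded in the ciphertext blob.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"sensitive configuration value"
```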
How key industries are preparing for eIDAS with DIDComm
The successful formal verification of DIDComm paves the way for tremendous DIDComm adoption. To help provide an understanding of this important technology, we’ve outlined some of the industries where DIDComm can play an important role.
To learn more about DIDComm, visit DIDComm.org or join us for a meeting of the DIDComm users group. All are welcome!
In the evolving landscape of digital communication, the need for secure, private, and efficient data exchange has never been more critical. This is especially true for the businesses, governments, and individuals who will be affected by eIDAS regulation in the European Union as they seek to protect their information while also enhancing user experiences. This blog explores the concept of DIDComm, its relationship with eIDAS, and how it stands to revolutionize five key industries.
What is DIDComm?
DIDComm (Decentralized Identifier Communication) is an open standard for secure and private communication using Decentralized Identifiers (DIDs). DIDs are a type of digital identifier that allow for verifiable, self-sovereign identities. Unlike traditional communication protocols, DIDComm enables peer-to-peer communication without relying on centralized authorities, tools, or platforms.
Key technical aspects of DIDComm include:
Use of DIDs for addressing and authentication
End-to-end encryption for message confidentiality
Decentralized architecture, allowing direct communication between parties, including the devices humans use such as phones, laptops, and tablets
Support for various transport protocols (HTTP, Bluetooth, NFC, etc.)
This technology leverages advanced cryptographic techniques to ensure data integrity, privacy, and security. This is particularly beneficial in scenarios where trust and security are paramount, such as in financial transactions, healthcare data sharing, and cross-border communications.
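To make the message model concrete, here is a minimal sketch of a DIDComm v2 plaintext message built as a Python dictionary, before it is signed and/or encrypted into an envelope for transport. The field names follow the DIDComm Messaging specification; the DIDs and the message type URI are placeholders, not real identifiers.

```python
import json
import time
import uuid

# A DIDComm v2 plaintext message. In practice this structure is wrapped in a
# signed and/or encrypted envelope before being sent over any transport
# (HTTP, Bluetooth, NFC, ...); the DIDs and type URI below are placeholders.
message = {
    "id": str(uuid.uuid4()),                # unique message identifier
    "type": "https://example.org/protocols/basicmessage/2.0/message",
    "from": "did:example:alice",            # sender's DID
    "to": ["did:example:bob"],              # recipient DIDs
    "created_time": int(time.time()),
    "body": {"content": "Hello from a DIDComm sketch"},
}

print(json.dumps(message, indent=2))
```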
Understanding eIDAS
eIDAS (Electronic Identification, Authentication, and Trust Services) is a regulation by the European Union aimed at creating a single, standardized digital identity framework across member states. eIDAS enables secure and seamless electronic interactions between businesses, citizens, and public authorities. It also ensures that electronic signatures, seals, and other trust services are recognized across all EU member states. By fostering trust and security in the digital economy, eIDAS is a key enabler of cross-border digital services and is central to the EU’s Digital Single Market strategy.
DIDComm and eIDAS: A Powerful Combination
The combination of DIDComm's decentralized, secure communication with eIDAS's standardized, legally recognized digital identity framework opens up new possibilities across various industries. Like OID4VC, the communication protocol that is expressly mentioned in eIDAS, DIDComm can be used for exchanging verifiable credentials and initial login. However, DIDComm goes beyond that. Using DIDComm, organizations can leverage eIDAS-compliant identities for ongoing secure, private communication, while also providing a flexible infrastructure that can adapt to future changes in regulatory requirements.
In addition to the standards and specifications explicitly required by legislation, it is important to enable collaboration and communication using adjacent technologies. The best results will be obtained by using the right combination of technologies integrated seamlessly together.
1. Finance: Secure Cross-Border Transactions
In the finance sector, the need for secure, efficient, cross-border transactions is a constant challenge. Traditional methods involve multiple intermediaries, each adding cost, delay, and potential points of failure or vulnerability. With DIDComm, financial institutions can dramatically streamline cross-border transactions by engaging in direct, encrypted communication to verify identities and transaction details, bypassing the need for intermediaries.
The reduction in intermediaries not only cuts down on transaction times and associated fees, it minimizes the potential points of failure and security vulnerabilities. This approach ensures that sensitive financial data is handled with the utmost privacy, meeting the rigorous security standards expected in the finance industry while enhancing the user experience and making it easier to do business. Furthermore, customers benefit from a more efficient, transparent process, enhancing trust and satisfaction with financial services across the EU.
Example: A European bank conducting a transaction with a bank in another EU country could use DIDComm to securely exchange verified credentials for the transaction. By integrating eIDAS-compliant digital identities and DIDComm, the banks can communicate with the relevant stakeholders on trusted channels to ensure that the parties involved are authenticated and that the transaction is legally binding across borders. This not only reduces transaction times and costs but also enhances security and privacy by limiting data exposure to unnecessary parties.
2. Industrial Machinery: Automated Event Notifications
In industrial settings, the efficient operation of machinery is crucial to maintaining productivity. Timely notifications about the status or needs of machinery, such as maintenance requirements or operational events, can significantly reduce downtime and improve overall efficiency and security.
By leveraging DIDComm combined with eIDAS-compliant digital identities, the system ensures that only authorized personnel receive and can act on this information. This approach not only improves efficiency by ensuring that the right people are notified in real-time, but also enhances security by preventing unauthorized access to sensitive operational data. The use of DIDComm in this scenario ensures that communication is encrypted and tamper-proof, providing a trustworthy and streamlined method for managing industrial operations across large and complex environments.
Use Case: Imagine a large warehouse in Germany where various pieces of industrial machinery are operating. When a specific mechanical event occurs, such as a temperature spike in a critical component, the machinery can automatically trigger a notification. Using DIDComm, this notification can be securely communicated to interested parties, such as the maintenance team, the machinery manufacturer, and the warehouse management system.
Read More: Gaia-X Secure and Trustworthy Ecosystems with Self Sovereign Identity (Gaia-X)
3. Travel: Seamless and Secure Traveler Verification
The travel industry relies heavily on the secure verification of travelers' identities, whether for boarding flights, crossing borders, or checking into hotels. The challenge is to ensure that this verification process is both secure and efficient, enhancing the overall travel experience while protecting personal data.
Using DIDComm, the traveler can securely share their eIDAS-compliant digital identity with the airline, airport authorities, and border control via their mobile device. This digital identity is verified in real-time, and the traveler is seamlessly cleared through each checkpoint without the need to repeatedly present physical documents.
DIDComm ensures that the traveler’s data is encrypted and only accessible by the intended parties, reducing the risk of identity theft or unauthorized access. Additionally, this process improves the user experience by speeding up the verification process and reducing wait times. The use of secure, decentralized communication also allows for more flexible travel arrangements, such as automated hotel check-ins or renting a car, where the traveler’s verified digital identity can be securely communicated directly to service providers.
Use Case: Consider a scenario where a traveler from France is flying to another EU country. Upon arrival at the airport, they need to go through multiple identity verification steps—at the airline check-in counter, security checkpoints, and immigration control. Traditionally, this involves presenting physical documents like a passport, which can be cumbersome and slow.
Read More:
Travel Digital Identity – Seamless Travel Powered by Digital Identity (Goode Intelligence)
Biometric Digital Identity Travel and Hospitality (Prism)
Aruba Makes Steady Progress In Launching New Digital Travel Credential
4. Supply Chain Management: Secure and Transparent Tracking
Supply chain management involves coordinating numerous stakeholders, often across different countries and regulatory environments. Ensuring the authenticity and integrity of goods and documents as they move through the supply chain is critical.
Integrating DIDComm into supply chain management enhances transparency, security, and efficiency across all stages of the supply chain. By enabling secure, encrypted communication between manufacturers, suppliers, and logistics providers, DIDComm ensures that all parties have access to accurate and verified information in real time. This is particularly valuable when supply chains are not running smoothly. Communicating about part acceptance, quality measurement, and delivery schedules can benefit from standards-based secure communication.
The use of eIDAS-compliant digital identities further strengthens trust, as each stakeholder's identity and credentials are authenticated and legally recognized across borders. This reduces the risk of fraud, errors, and delays, leading to a more reliable and efficient supply chain. Ultimately, businesses benefit from improved operational efficiency and reduced costs, while customers receive products that are securely tracked and delivered with greater transparency.
Use Case: A manufacturer in France could use DIDComm to securely communicate with suppliers and logistics providers across the EU, using eIDAS-compliant digital identities to verify the credentials of each party involved. This secure communication can include encrypted data about the origin, handling, delivery, and acceptance of goods, ensuring transparency and trust throughout the supply chain. By using DIDComm, supply chain managers can reduce the risk of fraud, improve efficiency, and ensure compliance with various regulatory requirements.
Read More:
Implementing Digital Product Passports using Decentralized Identity Standards (Spherity Blog)
EU Digital Product Passports and Enabling Compliance in the US Pharmaceutical Supply Chain (DIF Blog)
Identity Use Case Spotlight Supply Chains (Indicio Blog)
5. Government Services: Secure Citizen-to-Government Interactions
Governments are increasingly moving towards digital services to improve efficiency and accessibility. However, ensuring that these services are secure and that citizens' data is protected is a significant challenge.
Implementing DIDComm for citizen-to-government interactions revolutionizes how public services are accessed and utilized. Citizens can share information more conveniently, and their communications with government entities are encrypted and protected from unauthorized access. The integration of eIDAS-compliant digital identities ensures that these interactions are not only secure but also legally recognized across EU member states. This results in a more efficient public service system, where processes such as applying for permits or accessing social services are streamlined, reducing administrative burdens and enhancing the overall user experience for citizens.
Use Case: A citizen in Italy could use their eIDAS-compliant digital identity to securely communicate with government agencies via DIDComm. This could include applying for permits, submitting tax information, or accessing public services. DIDComm ensures that all communications are encrypted and that the citizen’s data is only accessible by the intended government agencies. This use case demonstrates how DIDComm can enhance the security and privacy of digital government services while providing a better user experience by simplifying and streamlining interactions with public authorities.
Read more:
Enhancing European Interoperability Frameworks to Leverage Mobile Cross-Border Services in Europe (Association for Computing Machinery)
Preparing for eIDAS and beyond
DIDComm represents a powerful tool for enhancing security, privacy, and efficiency across various industries that are preparing for an eIDAS future. By enabling secure, decentralized communication and integrating legally recognized digital identities, these technologies are set to revolutionize how businesses, governments, and individuals interact in the digital world.
As these technologies continue to develop and gain adoption, we can expect to see significant improvements in the way sensitive data is handled, trust is established, and user experiences are delivered across borders.
November 2024
DIF Website | DIF Mailing Lists | Meeting Recording Archive
Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Reminders; 3. Open Groups; 4. Announcements at DIF; 5. Community Events; 6. DIF Member Spotlights; 7. Get involved! Join DIF
🚀 Decentralized Identity Foundation News
DIDComm: Formal Verification and Strengthened Security
DIDComm is a cornerstone protocol in Self-Sovereign Identity (SSI), enabling private, authenticated messaging between entities using Decentralized Identifiers (DIDs). In a breakthrough study, researchers have completed the first formal security analysis of DIDComm, marking a crucial milestone in verifying the protocol's security promises. The paper, "What Did Come Out of It? Analysis and Improvements of DIDComm Messaging", by Christian Badertscher, Fabio Banfi, and Jesús Díaz Vico not only validates DIDComm's core security model but also introduces significant improvements:
Formal proof of DIDComm's anonymity and authenticity goals
A new encryption algorithm that doubles performance while maintaining security
Enhanced privacy protections that minimize information leakage
The findings position DIDComm as a rigorously verified protocol ready for widespread adoption in secure identity frameworks and data spaces. Dr. Carsten Stöcker, CEO of Spherity GmbH, highlights DIDComm's role in facilitating secure peer-to-peer communication, noting its particular relevance to business-to-business (B2B) and machine-to-machine (M2M) communications.
DIF at IIW 39: Leading Privacy, Interoperability, and Practical Solutions in Decentralized Identity
At Internet Identity Workshop (IIW) 39, DIF leadership and members joined with global identity experts to address some of today's most pressing digital identity challenges. Sessions led by the DIF community underscored its commitment to building secure identity systems that foster privacy, trust, and digital interactions. Here are key takeaways and themes from the event:
Verifiable AI: Proof of Approved AI Agent, Proof of Personhood, Content Credentials… and much more
The complex interdependencies of AI and digital identity were a hot topic at IIW, with DIF's leaders and members leading the discussions. Ankur Banerjee of cheqd led a comprehensive discussion in the session Verifiable AI, focusing on ways to enable AI transparency and accountability in training data, models, and delegation chains.
This conversation continued across many sessions, covering Personhood Credentials issuance, schemas, risk frameworks, and legal implications in sessions featuring Otto Mora, Andor Kesselman, Steve McCown, and Linda Jeng.
DIF's Kim Duffy covered DIF's extensive work on Personhood Credentials (PHCs), including DIF's co-authorship of the Personhood Credentials paper with OpenAI, Microsoft, Harvard Society of Fellows, and more; PHC schema design, use cases, and risk frameworks; and upcoming Applied Crypto work item incorporating zero-knowledge proofs (ZKPs) for PHCs.
Come Build Your Identity Project With DIF Labs
Andor Kesselman, Ankur Banerjee, and Kim Duffy gave an update on the DIF Labs initiative, a program designed to accelerate practical implementation of decentralized identity technologies. This initiative addresses a crucial gap in the ecosystem between standards organizations and traditional incubators.
DIF Labs will provide a "safe space" for builders to collaborate on real-world projects, offering protection, access to industry experts, and project evangelism support. Unlike traditional incubators or standards bodies, DIF Labs focuses on rapid development of practical solutions without taking equity or getting bogged down in lengthy standardization processes.
Golda Velez discussed her Linked Claims project, which will be part of DIF Labs' first cohort. Stay tuned to learn more about this and DIF Labs.
Privacy-First Identity Design
Privacy was a central theme across sessions, with DIF leaders and members emphasizing the need for identity systems that put user control and transparency first. Steve McCown joined Denise Farnsworth in leading the session “Verifiable ID with the State of Utah - Why are we different?” to discuss Utah’s leadership in creating digital identity for residents that respects privacy from the ground up.
New DIF Associate Member Ken Griggs led a session on Anonymity vs Privacy, guiding a nuanced discussion of its technical and societal impacts.
DIDComm v2 Update
Sam Curren of Indicio presented the latest on DIDComm v2, a protocol for secure, transport-agnostic, peer-to-peer communication. DIDComm v2’s design enables flexible, private credential exchange across platforms. His presentation showed how DIDComm v2 not only strengthens interoperability but also lays a foundation for scalable, secure identity interactions. Sam highlighted DIDComm’s recent formal verification and security improvements. Stay tuned to learn about DIF’s upcoming DIDComm Interop-athon.
Credential Schemas for Interoperability
Otto Mora and Kim Duffy led discussions on DIF's credential schemas, including the Basic Person Schema. Credential schemas play a critical role in ensuring interoperability across finance, telecom, healthcare, and other sectors by creating a consistent structure for credentials.
Decentralized Identity in the Music Industry
Cole Davis from Switchchord shared insights into how decentralized identity is transforming the music industry by facilitating secure, direct engagement between artists and fans. This approach enables artists to retain control over their content, sidestepping intermediaries and providing a more transparent, direct relationship with their audience. The session demonstrated the broad potential for decentralized identity to bring fairness and trust to new areas.
DID Method Standardization
Markus Sabadello, Alex Tweeddale, and Kim Duffy led a session on DID Method Standardization, aimed at promoting maturity and adoption of the W3C Decentralized Identifier specification. Kim gave an update on DIF's upcoming DID Method Standardization Working Group, with joint participation from W3C, Trust Over IP Foundation, and International Association For Trusted Blockchain Applications (INATBA). Markus provided an update on DIF's DID Traits specification, which helps implementors choose DID methods according to method characteristics. Alex emphasized the importance of reducing complexity associated with selecting DID methods, and the discussion centered on ways to balance the need for curation with open, transparent processes.
SSI 101
IIW would not be complete without Limari Navarrete's SSI 101 session, describing the technical standards and – more importantly – foundational principles aimed at enabling human agency and privacy in our digital interactions.
🛠️ Working Group Reminders
💡 Identifiers and Discovery Working Group
Identifiers and Discovery meets bi-weekly at 11am PT / 2pm ET / 8pm CET Mondays
🪪 Claims & Credentials Working Group
The Credential Schemas work item meets bi-weekly at 10am PT / 1pm ET / 7pm CET Tuesdays
🔐 Applied Crypto WG
The DIF Crypto - BBS work item meets weekly at 11am PT / 2pm ET / 8pm CET Mondays
📦 Secure Data Storage
DIF/CCG Secure Data Storage WG - DWN Task Force meets bi-weekly at 9am PT / 12pm ET / 6pm CET Wednesdays
If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.
📖 Open Groups at DIF
Veramo User Group
Meetings take place weekly on Thursdays, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details.
🌏 APAC/ASEAN Discussion Group
The DIF APAC call takes place monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.
🌍 DIF Africa SIG
The inaugural meeting of the DIF Africa Special Interest Group (SIG) kicked off with introductions by co-chairs Gideon Lombard of DIDx and Jack Scott-King of Vera Innovations, who emphasized the SIG’s goals: to raise awareness, encourage collaboration, and promote decentralized identity solutions tailored to Africa's needs.
The DIF Africa SIG call takes place monthly on the 3rd Wednesday of the month at 1pm SAST. Please see the DIF calendar for updated timing.
🌍 DIF China SIG
The DIF China SIG recently launched an AI+DID research group. The chair of the DIF China SIG, Xie Jiagui, and co-chair Professor Senchun Chai of the Beijing Institute of Technology jointly announced the formation of the group, aimed at exploring the intersection of Artificial Intelligence (AI) and Decentralized Identity to address growing challenges in digital identity and privacy.
📢 Announcements at DIF
DIF Hackathon
The DIF Hackathon is entering its final week. See our playlist for in-depth challenge descriptions, including insights into the transformative solutions our participants will deliver.
Sessions include:
DIF Hackathon 2024 Opening Session
Intro to Decentralized Identity
Ethereum Foundation | PSE Hackathon Challenge
NetSys Challenge | Seamless Traveler Experience
Ontology's ONT Login Challenge
TBD's Challenge: Known Customer Credentials (KCC)
ArcBlock Framework and Tools for Building with DID/VC Pt. 1
Building Decentralized Identifier (DID) Applications with ArcBlock Pt 2
Building SSI Solutions: An Introduction to the Truvity SDK
Pinata File Based Solutions
A Deep Dive on Decentralized Identifiers
Resolve DIDs and Verify VCs for Free with VIDOS
Anonyome Labs Challenge: Personhood Credentials
Crossmint’s Reusable Identity Challenge
Components for Secure Identity and Information Verification with Extrimian
Future of Education & Workforce Challenge Set
Harnessing Decentralized Identity for Verifiable AI with cheqd
🗓️ DIF Members
Extrimian / Quark ID in the News
DIF Associate Member Extrimian and Quark ID received accolades as over 3.6M Buenos Aires citizens gained secure identity documents based on decentralized ID and ZKPs.
Buenos Aires is the first city to issue decentralized IDs to its citizens!
— ZKsync (∎, ∆) (@zksync) October 22, 2024
By integrating @Quark_ID into the miBA platform, over 3.6M Buenos Aires citizens now have control over their documents via blockchain and ZK proofs, with Era serving as the settlement layer.… pic.twitter.com/QS4lhvjpE8
This effort received attention from Vitalik Buterin, resulting in this call to action.
Glad to see our South American collaborators, including @Quark_ID and @Extrimian, being recognized! Check out the open challenge they’ve presented during the @decentralizedID hackathon: https://t.co/Afa3kO3s3s https://t.co/Ub7wfliwvG
— DIF (@DecentralizedID) October 22, 2024
👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.
New Member Orientations
If you are new to DIF, join us for our upcoming new member orientations. Please see our contact information below for notifications on orientations and events.
🆔 Join DIF!
If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:
Follow us on Twitter/X
Join us on GitHub
Subscribe on YouTube
🔍 Read the DIF blog
New Member Orientations
If you are new to DIF, join us for our upcoming new member orientations. Find more details on DIF’s Slack or contact us at community@identity.foundation.
Since we launched AutoGreenCharge, an app aimed at reducing carbon emissions from electric vehicles (EVs), many early users and corporate partners have asked us to explain how it works — especially how it uses Energy Attribute Certificates (EACs). Today, we’ll break down what AutoGreenCharge does and how it helps EV drivers, fleet owners, and anyone interested in decarbonizing EV charging and the grid as a whole.
Understanding Energy Attribute Certificates (EACs)
First, let’s talk about electricity markets and EACs. These certificates are used to track and trade the environmental benefits of renewable energy. EACs represent the “green” qualities of renewable electricity, such as how it was produced (wind, solar, etc.), where it was generated, and when it was created. For every 1 megawatt-hour (MWh) of renewable electricity, one EAC is issued. These certificates can be sold separately from the actual electricity, so people or businesses can support renewable energy without being directly connected to a renewable power plant.
When someone wants to claim they’ve used renewable electricity, they “retire” the certificate in a registry, ensuring that only one person or business can take credit for that specific green energy. Governments and companies worldwide have been using EACs for years to help make their energy use cleaner.
Different regions have their own versions of EACs:
Europe uses Guarantees of Origin (GOs), which prove electricity came from renewable sources.
North America uses Renewable Energy Certificates (RECs), similar to GOs, but often certified by a trusted third party.
Other countries, like Australia and Japan, have their own systems that work in a similar way.
International Renewable Energy Certificates (I-RECs) are used in many emerging economies and track renewable energy across borders.
EACs can be bought and sold on various platforms, and once purchased, they can be retired to claim a decrease in emissions.
How AutoGreenCharge Works
AutoGreenCharge simplifies the process of matching EACs to EV charging sessions. It takes into account where and when you charge your car, how much electricity you use, and then automatically selects and purchases EACs to offset any non-renewable energy in that session. Since most EV charging sessions use less electricity than a full MWh (the standard size of an EAC), AutoGreenCharge splits the certificates into smaller pieces to match your charge. This process is verified by Energy Web’s Worker Node network, which ensures everything is tracked accurately.
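As a rough illustration of the matching logic described above, the sketch below estimates what fraction of a single 1 MWh certificate a charging session would need. The field names, numbers, and the fallback of assuming zero renewable content when grid data is missing (discussed in the Q&A below) are simplifying assumptions, not Energy Web's actual implementation.

```typescript
// Rough sketch of splitting 1 MWh certificates into fractions that cover a single
// charging session. Illustrative only; not Energy Web's production logic.
interface ChargingSession {
  kWh: number;                 // energy delivered in this session
  gridRenewableShare?: number; // 0..1 if known; undefined if no grid data is available
}

const KWH_PER_EAC = 1000; // one EAC represents 1 MWh = 1,000 kWh

function eacFractionNeeded(session: ChargingSession): number {
  // If nothing is known about the physical grid connection, conservatively assume
  // none of the electricity was renewable and cover the full session volume.
  const renewableShare = session.gridRenewableShare ?? 0;
  const nonRenewableKWh = session.kWh * (1 - renewableShare);
  return nonRenewableKWh / KWH_PER_EAC; // fraction of one certificate to retire
}

// Example: a 32 kWh charge with no grid data needs 0.032 of one EAC.
console.log(eacFractionNeeded({ kWh: 32 }));                          // 0.032
console.log(eacFractionNeeded({ kWh: 32, gridRenewableShare: 0.4 })); // 0.0192
```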
AutoGreenCharge also has a new feature for enterprise: “Bring Your Own EAC.” This allows companies to use EACs they’ve already purchased. If they don’t want to handle it themselves, the app can take care of buying and matching the right certificates for them. For example, the first certificates purchased by Energy Web for the app were high-quality wind RECs from the U.S.
Addressing Common Questions
One AutoGreenCharge tester asked the following: “How do you know I am not charging my vehicle using a diesel generator?”
First, we should note that while it is possible to run a diesel generator to charge a vehicle, it’s a fairly rare occurrence as it doesn’t make economic sense. Charging an EV with a diesel generator is expensive, and buying EACs to greenwash it would be even more costly. However, we can consider this scenario as an example to demonstrate how AutoGreenCharge works. If the app doesn’t have any further information about the physical grid connection of the charging station in use, it will assume that none of the electricity used was renewable. So in this scenario, an EAC would be matched to the full charging session volume to make sure it’s covered by 100% renewables.
EACs give individuals and organizations a way to contribute to the success of renewable facilities, attracting more investment into a greener grid that will replace emitting electricity generators over time. AutoGreenCharge makes every charge event supportive of renewable energy — at least somewhere in your EAC region.
Limitations of EACs Today
While EACs are great for individual action supporting renewables, they have some limitations. For example, they usually don’t specify the exact time of electricity generation — just the year. They also don’t always account for physical grid limitations, meaning you could buy an EAC from another region even if it doesn’t match the electricity connection where you’re located.
Still, EACs are currently the most widely used system for tracking renewable energy. Efforts are already underway to make EACs more detailed. AutoGreenCharge supports several types of emerging protocols, like Trusted Green Charging and DIVE, which provide a more accurate environmental impact for specific charging events.
We’re excited about the future of AutoGreenCharge and welcome feedback! If you’re interested in learning more or want to share ideas, install the app for Android and iOS now or reach out to us.
Energy Web Insights: AutoGreenCharge, explained was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
This seamless experience is now possible thanks to HiTRUST’s latest collaboration with Taiwan’s leading travel platform, Colatour. Building on nearly a decade of trusted partnership, HiTRUST and Colatour have launched an innovative passwordless solution. Powered by global FIDO standards, it redefines the security of digital travel booking platforms.
Passkey Authentication On Colatour
Nowadays, in a fast-paced digital world where real-time interactions and personalized travel experiences are a must, it’s essential for businesses to provide secure and user-friendly customer journeys. As cyber threats escalate, targeting personal and financial data, HiTRUST is leveraging the FIDO Alliance’s global standard for passwordless authentication, backed by industry giants like Apple, Google, and Microsoft.
Colatour users can now bid goodbye to passwords. HiTRUST’s FIDO-based solution replaces them with a more secure alternative: biometrics. Whether it’s a fingerprint or facial recognition, users can authenticate to Colatour’s online platform instantly, without a password. On the web version, this method is compatible with all major browsers, making it easy for users to access.
Supported by the FIDO Alliance and technology leaders like Apple, Google, and Microsoft, Passkeys transform online credential management by synchronizing devices within the same ecosystem, removing the need to re-register when upgrading or switching between devices. This ensures a simple, secure, and convenient user experience.
Registration Process
Passwordless Login Process
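For readers curious what these two steps look like in a browser, the following is a minimal WebAuthn sketch of the ceremonies passkeys rely on. The relying-party name, user details, and challenge handling are placeholders; HiTRUST's and Colatour's production integration will differ.

```typescript
// Minimal browser-side sketch of the two FIDO/WebAuthn ceremonies behind a passkey flow.
// Challenges must come from the relying party's server; rp and user values below are
// placeholders, not Colatour's or HiTRUST's actual configuration.

// Registration: create a passkey bound to the site and protected by biometrics.
async function registerPasskey(challenge: Uint8Array, userId: Uint8Array) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,                                   // random bytes issued by the server
      rp: { name: "Example Travel Site" },         // relying party (placeholder)
      user: { id: userId, name: "traveler@example.com", displayName: "Traveler" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",                   // discoverable credential (passkey)
        userVerification: "required",              // fingerprint / face unlock
      },
    },
  });
  return credential; // send the attestation response to the server for verification
}

// Login: sign a fresh server challenge with the previously created passkey.
async function loginWithPasskey(challenge: Uint8Array) {
  const assertion = await navigator.credentials.get({
    publicKey: { challenge, userVerification: "required" },
  });
  return assertion; // server verifies the signature against the stored public key
}
```

The private key never leaves the device's authenticator and the credential is bound to the site's origin, which is why this approach resists the phishing attacks described below.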
Mitigating Cyber Threats on Tourism Platforms
With HiTRUST’s passwordless authentication, Colatour’s users can enjoy a stress-free experience—no more complex passwords to remember or fear of account theft through phishing attacks. Instead, users authenticate securely using their unique individual biometrics, ensuring peace of mind across all devices.
For Colatour, FIDO secures customer accounts by preventing hacks and data leaks. With biometric authentication, it blocks fraudsters, lowers fraud risks, and builds stronger customer trust and safety.
On the other hand, Colatour users benefit from this advanced approach by replacing passwords with biometric authentication, providing a secure login and seamless experience. Users can easily log in to the website or app using facial recognition or fingerprint authentication, eliminating the hassle of entering account details while enhancing security. This creates a fast and safe digital tool for travelers, ensuring personal data and travel itineraries are protected from hackers and fraud.
Gaining a First-Mover Advantage with Passwordless Technology
Our partnership sets a new standard for secure, seamless user experiences in the travel industry. As more sectors adopt this innovative approach, Colatour leads the way. Not only can B2C members benefit from FIDO, but Colatour also offers B2B members access to biometric authentication on their website and app. Clients can easily log in with facial recognition or fingerprint authentication, ensuring a safer, worry-free travel experience and boosting customer engagement. By implementing advanced security measures like passwordless authentication, Colatour not only protects customers from potential fraud but also strengthens trust and loyalty. HiTRUST remains committed to delivering cutting-edge solutions, safeguarding Colatour and its travelers, and paving the way for a secure future in the travel industry.
About Colatour Travel Service CO., LTD.
Founded in 1978, Colatour Travel Service CO., LTD. is Taiwan’s largest travel agency in terms of group tours and a leading brand in the travel industry. With over 1,400 employees, Colatour operates one of the highest-traffic B2C websites and numerous physical stores. It is also the largest wholesale travel company in Taiwan. Over the past 40 years, Colatour has served more than 10 million outbound group travelers and issued hundreds of millions of airline tickets, earning numerous awards as a top partner from airlines, resorts, and hotels. The Colatour Group includes Colatour Travel, Comfort Travel Service, and Polaris Travel Service.
Discover more about how HiTRUST and Colatour are transforming the future of travel security:
TTN Media Article | 搶攻會員經濟可樂旅遊全新「可樂幣」回饋上線! (Seizing the membership economy: Colatour launches its new "Cola Coin" rewards)
Global FIDO Alliance study reveals latest consumer trends and attitudes towards authentication methods and their perceived online security
Passkey familiarity growing – Just two years after passkeys were first announced and started to be made available for consumer use, awareness has risen by 50%, from 39% familiar in 2022 to 57% in 2024
Password usage stagnates as consumers favor alternatives – The majority of those familiar with passkeys are enabling the technology to sign in. Meanwhile, despite passwords remaining the most common way to log in, the number of people using passwords across use cases declined as alternatives continue to rise in availability
Waning password patience is costing sales and loyalty, especially among younger consumers – 42% of people have abandoned a purchase at least once in the past month because they could not remember their password. This increases to 50% for those aged 25-34 versus just 17% of over 65s
Online scams and AI alarming consumers – Over half of consumers reported an increase in the number of suspicious messages they notice and an increase in scam sophistication, driven by AI. Younger generations are even more likely to agree, while older generations remain unsure how AI impacts their online security
29 October 2024 – The FIDO Alliance today publishes its fourth annual Online Authentication Barometer, which gathers insights into the state of online authentication and consumer perceptions of online security in ten countries across the globe.
Key findings
The research revealed promising consumer momentum building around passkey adoption and clear signs people are recognizing the limitations of passwords and are choosing passwordless alternatives, like passkeys, where available. In the two years since passkeys were first announced, global awareness has jumped by 50%, rising from 39% familiar in 2022 to 57% in 2024. Awareness is driving adoption too – the majority of those familiar with passkeys (62%) are using them to secure their apps and online accounts.
The data also revealed the cost to organizations still relying on legacy password sign-ins – especially among younger generations. 42% abandoned a purchase in the last month due to a forgotten password, rising to over half of those under 35. Similarly, over half of consumers (56%) have given up accessing a service online because they couldn’t remember a password in the last month, rising to 66% of those under 35.
The survey revealed other clear signs that password usage and trust are stagnating globally as more secure and user-friendly passwordless alternatives become available. Overall, the number of consumers entering a password manually across use cases decreased again from 2023, while biometrics ranked as the authentication method consumers consider to offer the best login experience, and as the method they consider most secure, for the second year running.
When consumers were asked about how they have improved account security in the last year, numbers continued to decline among those who increased the complexity of a password, while those choosing biometrics and using authenticator apps have steadily risen.
Passkeys at two: the road to mainstream
“Consumer expectations are changing, and this data should serve as a clear call to action for brands and organizations still relying on outdated password systems. Consumers are actively seeking out and prefer passwordless alternatives when available, and brands that fail to adapt are losing patience, money, and loyalty – especially among younger generations.
“When consumers know about passkeys, they use them. Excitingly, 20% of the world’s top 100 websites and services already support passkeys. As the industry accelerates its efforts toward education and making deployment as simple as possible, we urge more brands to work with us to make passkeys available for consumers. The pace of passkey deployment and usage is set to accelerate even more in the next twelve months, and we are eager to help brands and consumers alike make the shift,” comments Andrew Shikiar, CEO at FIDO Alliance.
Notably, passkeys have seen strong adoption in high-growth, digitally advanced markets like China and India, which ranked top globally with 80% and 73% enablement, respectively. The UK followed close behind in third place, with adoption levels at 66%.
Younger consumers most attuned to online scams and AI threats
Consumer concerns about online security were also revealed to be high – and again, it is younger consumers most attuned to new threats.
Over half of consumers (53%) cited an increase in the number of suspicious messages they noticed in recent months, driven mostly by SMS (53%) and email (49%). Similarly, 51% detected an increase in the sophistication of threats and scam messages, likely driven by AI-enhanced attacks. Zooming in on demographic data suggests older generations are at greatest risk: 54% and 61% of 18-24 and 25-34-year-olds respectively noticed scams getting smarter, while just a third of 55-64-year-olds and 25% of 65+ said the same. Similarly, 20% of people over 55 said they were unsure about the impact AI has on their online security.
ENDS
Notes to editors
Research for the FIDO Alliance’s Online Authentication Barometer was conducted by Sapio Research among 10,000 consumers across the UK, France, Germany, US, Australia, Singapore, Japan, South Korea, India, and China.
About FIDO Alliance
The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies, and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services.
Contact
press@fidoalliance.org
Global FIDO Alliance study reveals latest consumer trends and attitudes towards authentication methods and their perceived online security
Key findings
Passkey familiarity growing – In the two years since passkeys were announced and made available for consumer use, passkey awareness has risen by 50%, from 39% familiar in 2022 to 57% in 2024.
Password usage stagnates as consumers favor alternatives – The majority of those familiar with passkeys are enabling the technology to sign in. Meanwhile, despite passwords remaining the most common way for account sign-in, usage overall has declined as alternatives rise in availability.
Waning password patience is costing sales and loyalty, especially among younger consumers – 42% of people have abandoned a purchase at least once in the past month because they could not remember their password. This increases to 50% for those aged 25-34 versus just 17% for over 65s.
Online scams and AI alarming consumers – Over half of consumers reported an increase in the number of suspicious messages they notice and an increase in scam sophistication, driven by AI. Younger generations are even more likely to agree, while older generations remain unsure how AI impacts their online security.
Read the Full Report | Read the Press Release
The use of large language models (“LLMs”) such as ChatGPT can be a controversial topic. People have strong opinions about them for all kinds of reasons, from concerns around data privacy to the propagation of bias and inequality. The amount of energy used to train LLMs, and the amount of water used to cool data centres on which cloud-based versions run, is a particular concern. Our project with Friends of the Earth around Harnessing AI for environmental justice starts to unpack some of the environmental complexities of AI.
However, LLMs can be really useful as well, so in this post, we’re going to take a nuanced look at how various types of LLMs can be used together to improve both process and outcomes. We believe this is in keeping with the Spirit of WAO and our five focus areas: we believe in dialogue that leads to action. At WAO we’ve had a number of conversations around our personal use of AI and how we prefer to use it in our collective work.
WAO’s 5 focus areas, cc-by WAO
Why use LLMs?
Having used them regularly as part of our daily work for over 18 months at this point, we can say that using LLMs is useful for us in a number of concrete ways:
Speed — outsourcing ‘grunt’ work to LLMs means that we can spend time doing more creative work. A good example of this is using the optical character recognition (OCR) capabilities of some LLMs to recognise virtual sticky notes from project pre-mortem sessions we run with clients. It can then turn these into text, categorise them, and produce a spreadsheet. What would take perhaps an hour can be done in a couple of minutes.
Situational perspectives — Asking LLMs to perform a ‘red/amber/green’ or RAG analysis of work can also be useful. It’s easy to get carried away with personal interests or to miss an important part of a client brief. By asking LLMs to check your work using the RAG format against what has been requested, we can check that our work fits with what’s required. It can also help us think through problems: on Doug’s personal blog he discussed how he used an LLM in the process of deciding not to go through with a house purchase. The AI helped calculate things such as cumulative risk of flooding over time, using formulae he might not have otherwise known how to use.
Text synthesis — LLMs are excellent at synthesising information quickly and efficiently. This is true of the recently-launched Google NotebookLM which allows users to share up to 50 data sources, synthesise and then query them. One important point, however, is that we need to be thoughtful and focused in checking the accuracy of synthesis and summarisation work because LLMs are also great at hallucinating.
Three ways to approach LLMs sustainably
Start local
There’s a lot to unpack around the use of AI and the environmental impacts. As a tech and environmental activist, I was hesitant about using some of the popular, browser-based LLMs for a variety of reasons. I recently wrote up a quick, Captain Planet themed overview of some of the climate crisis issues that this kind of technology contributes to. Together with WAO members, I’ve discussed and debated various privacy, bias and attribution issues that are baked into our technology choices.
Having started to use AI across many contexts and looked into its impacts, we made the decision to use locally installed models and tools to help reduce our climate impact whenever possible. Locally installed models might not be updated as frequently as ChatGPT, but the climate benefits are huge. The processing happens locally, as does the storage, so your queries and conversations aren’t stored in big data centres and they remain private to you.
Use multiple models
There are many generative AI models, some smaller and some larger. A good place to see the diversity of these models is Hugging Face, a slightly awkwardly-named AI community. Here you can find the latest versions of models, along with datasets, and implementations of AI for various purposes.
It can be complicated to use the command line to install, configure, and run LLMs locally. However, as macOS users, one of the tools we’ve been playing with is RecurseChat, which simplifies the use of multiple local AI models within an easy-to-use app. It’s straightforward to install and has a privacy policy that doesn’t keep us up at night worrying about where our data is going.
Conduct experiments
When AI image generation first hit the scene, we were excited about its possibilities. Together, we made art projects like Time’s Solitary Dance or the PsychOps for Mental Health Awareness month, which I made by pairing AI-generated images with the fabulous Remixer Machine.
While we certainly had fun playing with image generation, we now default to simply searching for images (or asking Bryan Mathers to draw them for us ;) There is an incredible emission cost to image generation, so we think carefully before asking generative AI to make something for us.
This graphic, from p.156 of the Artificial Intelligence Index Report 2024, shows how ‘image generation’ has a much greater impact on CO2 emissions than, for example, ‘text classification’ or ‘question answering’.
Being environmentally conscious involves uncovering a great many complexities within our world. For example, when talking about the overall consumption of energy and water people tend to be quite alarmist. After a conversation with one of our gaming buddies about the annual energy use of a beer fridge, we calculated that it was the equivalent of prompting ChatGPT 4 over 2,200 times. Similarly, we calculated that you could prompt over 300 times for the same water usage as washing a car with a hose. While we recognise that using AI is an additional resource use, we try to put it in the context of wider consumption patterns.
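For transparency about how comparisons like these can be made, here is a back-of-the-envelope sketch. Every figure in it is an assumption chosen only to illustrate the method: the fridge's annual draw, the per-prompt energy and water estimates, and the car-wash volume are not measured values from the article.

```typescript
// Back-of-the-envelope sketch of the comparisons above. All figures are rough,
// illustrative assumptions rather than measurements.
const FRIDGE_KWH_PER_YEAR = 90;  // assumed annual draw of a small beer fridge
const KWH_PER_PROMPT = 0.04;     // assumed energy per GPT-4 prompt
const LITRES_PER_PROMPT = 0.5;   // assumed water (data-centre cooling) per prompt
const CAR_WASH_LITRES = 150;     // assumed water used washing a car with a hose

const promptsPerFridgeYear = FRIDGE_KWH_PER_YEAR / KWH_PER_PROMPT; // ~2,250 prompts
const promptsPerCarWash = CAR_WASH_LITRES / LITRES_PER_PROMPT;     // ~300 prompts

console.log(`Energy: one fridge-year is roughly ${promptsPerFridgeYear} prompts`);
console.log(`Water: one car wash is roughly ${promptsPerCarWash} prompts`);
```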
Go forth and prompt together
If you are working alone and making use of LLMs, be sure to let people know what your “little robot friend” had to say. At WAO we let each other know when something we’re pasting into a chat is AI generated or if we’re about to prompt an LLM on something. Transparency is an important part of the process.
This AI Transparency statement was included in the recent CAST post about the Charity Digital AI Summit and is CC-BY Kester Brewin.
Part of AI literacy is being able to spot common foibles of LLMs. For example, we’ve come to dislike the words ‘delve’ and ‘foster’ because of their overuse by ChatGPT. Once you start noticing particular models tend towards certain words or phrases, you start seeing them everywhere. This is another reason transparency is so important.
Using a local model and screen sharing while coworking with your team is a great way to learn about AI together. You can use AI prompting and team conversation to bounce ideas around and find some productive ambiguity. Another benefit of this process is, of course, the laughter. AI responds with some pretty hilarious things sometimes, so why not improve your team dynamic by making fun of AI together?
Do you need help with this kind of thing? Get in touch with WAO!
Cooperating through the use of AI was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
This topic was presented at IIW XXXIX (Fall 2024) on October 29, 2024.
My name is Christopher Allen. In 2016, in advance of the ID2020 conference at the United Nations in New York, I wrote “The Path to Self-Sovereign Identity”, the article that described the ten precepts of Self-Sovereign Identity (SSI) and provided the name that defined our ecosystem. Eight and a half years later, I ask the SSI community to reflect on a difficult question, one that challenges the very foundation of SSI. Has our ecosystem strayed from our principles? Have we in the SSI community lost our moral compass?
I speak not only in response to the argument last year at IIW that “SSI didn’t work” but also to underscore a graver issue: that perhaps in our pursuit of mainstream adoption we’ve sacrificed the very principles that set SSI apart.
In a world increasingly threatened by authoritarian encroachments, with troubling examples growing by the day, we face the urgent need to protect freedom. Russia and China flout international borders and expand their influence through intimidation and repression. This rise of authoritarianism is undermining human freedom across Europe, especially in Hungary, with worrying echoes in Italy, the Netherlands, France, and across the EU. The number of refugees and stateless people is rising every year.
Closer to home, we also face disconcerting overreach, such as the Texas Attorney General’s request for DMV records of people who changed their names, an alarming step towards state-enabled discrimination against trans people. If Trump is re-elected, it’s chilling to imagine what might become acceptable next.
These are ominous signs of an escalating trend toward pervasive surveillance and control, not only over our private lives but also over the core of human dignity itself. SSI was meant to stand as a bastion against these encroachments, defending personal autonomy and integrity. It imagined a world where people, not institutions, controlled their identities, without fear of overreach, manipulation, or coercion.
However, instead of holding the line on decentralization, we’ve compromised by adopting watered-down specifications such as “did:web” that diminish the resilience and independence of self-sovereign identities.
Instead of insisting that Legally-Enabled Self-Sovereign (LESS) Identity enforce strict data minimization for both businesses and governments, we have allowed over-identification to proliferate. Further, we have failed to effectively counter the overblown threats posed by the so-called ‘Four Horsemen of the Digital Apocalypse’ — software pirates, organized crime, child pornographers, and terrorists — while also overlooking the far greater risk of tyranny and its potential for real harm against those powerless to defend themselves.
These compromises have left us vulnerable, eroding SSI’s unique value proposition compared to the centralized approaches of government. As a result, centralized government approaches such as Apple and Google’s mDL/mDoc standards and federated corporate identity efforts are now winning the market battle against our SSI ecosystem.
This is the real reason the SSI ecosystem may be faltering: not due to market forces, but because we have failed to unequivocally commit to upholding the core values of decentralization and privacy and to resisting compromises that undermine human dignity. By doing so, we have become indistinguishable from the very systems we set out to disrupt.
The question is not whether SSI has “worked” in the marketplace. It is whether we have remained true to its ethical foundation and core principles, as envisioned from the start.
For another rebuttal to Riley Hughes’ Medium article, please see gabe’s “The Greatly Exaggerated Demise of SSI: A Rebuttal to Premature Eulogies”.
Webinar
Presented by: Stephen Deems, Director of Strategic Initiatives, Pittsburgh Supercomputing Center (PSC)
The essential role of Cyberinfrastructure (CI) in scientific research was discussed, highlighting the national resources available to support CI. Specifically, the NAIRR Pilot and the NSF-funded ACCESS Allocations program were explored, showcasing how researchers and educators can take advantage of advanced computing systems—completely free of charge!
We encourage you to view the recording to learn more about how you can leverage advanced computing systems to enhance your work!
Please contact Forough Ghahramani (research@njedge.net) for additional information.
We are grateful for support from the National Science Foundation.
CC* Regional Networking: Connectivity through Regional Infrastructure for Scientific Partnerships, Innovation, and Education (CRISPIE) project (NSF OAC- NSF0311528)
Complete the Form Below to Access Webinar Recording
The post Leveraging National Supercomputing Resources for Research and Education appeared first on NJEdge Inc.
Thanks to passkeys, you may not need to remember a password ever again.
Apple thinks 249 of my passwords need attention. Some of them have been reused. Some of them have been caught up in data breaches. Some are just bad passwords.
That’s why, for the past 11 years, a group called the FIDO Alliance has been working to kill passwords — or at least make us less reliant on them. FIDO, short for Fast IDentity Online, wants to make signing into your accounts not only more secure but also, as the name implies, faster and easier. Since its members include Amazon, Apple, Google, Meta, and other architects of our online experience, the FIDO Alliance is in a position to accomplish this, too.
Whether you’ve realized it or not, FIDO’s efforts have already transformed the way you sign into everything online. You may have noticed a few years ago, for instance, that a lot more sites started requiring something called multifactor authentication, which adds an extra step to the login process, like texting a code to your phone so the site can verify you are you. That was FIDO’s doing.
But after years of making logging in more difficult but more secure, the alliance recently began a major push to get platforms and people alike to adopt a technology that may just kill passwords altogether: passkeys.
Join us in Santiago, Chile for 3 hours of inspiration and co-creation around the tensions and challenges that cyberfeminism faces with the rise of emerging technologies.
The post Meet us in Chile for a discussion about emerging technologies and cyberfeminisms appeared first on The Engine Room.
By: Joni Brennan, President of DIACC-CCIAN
The recent FINTRAC Special Bulletin on money laundering and sanctions evasion in the legal profession is a clear reminder that safeguarding our financial systems—and upholding security—requires both vigilance and innovation. At the core of these efforts lies a powerful tool: digital verification of client identification. This technology is crucial in preventing the misuse of legal services for illicit activities like money laundering and sanctions evasion.
While most legal professionals work with integrity, the bulletin underscores that a few actors may unwittingly (or intentionally) facilitate financial crime. This crime threatens the integrity of our financial system and erodes the public’s trust in our legal system, where the majority are acting in good faith.
So, where does DIACC fit in? How does digital verification help mitigate these risks?
DIACC’s Role in Strengthening Digital Verification
Our work—anchored in the Pan-Canadian Trust Framework (PCTF)—is focused on enabling secure, privacy-respecting digital trust and verification services that support consumers, businesses, and governments. These services are essential in a world where digital interactions are the norm, and ensuring information authenticity is critical to preventing fraud and criminal activities.
The Law Society Profile: Supporting Legal Sector Security
One of the most exciting initiatives DIACC has undertaken is the development of the PCTF Law Society Profile. Developed in close collaboration with the Federation of Law Societies of Canada (FLSC) and other key interested parties, this project highlights how the legal sector can play a pivotal role in combating money laundering. By establishing auditable criteria to mitigate risk and build assurance for client verification, the PCTF Law Society Profile enables legal professionals to confidently select trusted services that verify their clients’ identities, providing evidence of investment in alignment with stringent anti-money laundering (AML) and Know-Your-Customer (KYC) requirements.
This certification process helps legal professionals choose services that stay one step ahead of fraudsters, ensuring transparency and security as they navigate high-value or high-risk transactions.
As the trusted list of DIACC PCTF Certified Providers grows, DIACC also launched its Member Services Directory, a directory of services offered by DIACC members, certified providers, and applicants. And, let’s face it, digital verification can be hard to explain, so we’ve launched a growing Big Book of Digital Trust Stories. Our Digital Trust Stories explain the mechanics and benefits in simple language to help people and organizations understand the benefits of digital trust and verification. These are just some of the tools DIACC develops to help ensure legal professionals can access the digital trust tools they need.
Addressing Risks in Financial and Legal Transactions
The findings from FINTRAC serve as a call to action. Legal professionals—particularly those involved in high-risk sectors like real estate or corporate structuring—are in a unique position of trust. Digital verification can play a crucial role in mitigating risks by adding layers of security that protect against misuse.
By adopting robust digital client verification solutions, legal professionals can:
Mitigate real estate and corporate transaction risks, ensuring transparency around beneficial ownership.
DIACC and the Path Forward
The need to protect our financial integrity has never been more apparent. DIACC’s commitment, mission, and vision align directly with the challenges identified in the FINTRAC bulletin. Our role is to ensure that digital trust and verification tools are practical, accessible, and secure across industries, particularly in high-risk areas like the legal profession.
We are proud to partner with industries, governments, and professionals who share our vision for a future where transparency, security, and trust are central to every transaction.
DIACC is an open and transparent community of leaders dedicated to one issue: the responsible adoption of digital trust and verification practices.
Contact DIACC to collaborate and build a future where transparency, privacy, security, and trust are at the forefront of every transaction.
Kia ora,
October may bring Halloween, but I’m calling it “Crazy-month” with the whirlwind of activity – whether it’s the lead-up to Christmas or gearing up for the second half of the financial year ending in March. At Digital Identity NZ (DINZ), we’re delivering on the present while planning ahead for an even bigger and better Digital Trust Hui Taumata 2025 in August.
Thanks to the support of members AWS, Innovise, Worldline, and Xero, we’ve submitted our input on the Consumer and Product Data bill and the open banking designation rules, ensuring our collective industry voices are heard.
It’s immensely satisfying to see the growing recognition of the vital role digital identity plays in eCommerce and secure online transactions across industries and government. DINZ’s involvement at the highest levels is paying off. Earlier this month, DINZ, our NZTech Group colleagues at FinTechNZ and RegTechNZ, along with DINZ members HGM, MERW, PaymentsNZ API Centre, and Sushlabs joined a roundtable with Minister Bayly. This followed our meeting with Minister Collins in August and focussed on fostering innovation in financial services. Workstreams are now underway, and we’ll report back to Minister Bayly in December.
With today’s latest news that the Government will introduce a single supervisor and a new funding model in a major overhaul of New Zealand’s Anti-Money Laundering and Countering Financing of Terrorism regime, we’d love to hear from you if you have thoughts on the digital identity aspects of this discussion.
On the regulatory front, we’ve met with the Office of the Privacy Commissioner (OPC) to reinforce our well publicised views on the proposed biometrics code of practice, with media also showing interest in our biometrics mahi. Keep an eye out for details of our upcoming ‘Facial recognition in retail’ webinar on Tuesday 26 November, where we will discuss the benefits, ethical considerations, and privacy issues specific to New Zealand.
We’ve also resumed discussions with the Department of Internal Affairs (DIA) to support the implementation of the Digital Identity Services Trust Framework (DISTF). Raising community and industry awareness is key, which is why we’re pleased to support online educator InformDI and DINZ member NEC in offering members free online DISTF learning, which launched this month. If you’re a DINZ member and haven’t signed up yet, you can find more details on accessing the course here.
Speaking of education, we’re also teaming up with DINZ member Spark to provide AcademyEX’s Master of Technological Futures students with a ‘Future of Identity’ online lecture soon.
For members, especially new ones – we encourage you to stand for election to our Executive Council to help guide our mahi and shape the future of digital identity in New Zealand.
Ngā mihi
Colin Wallis
Executive Director, Digital Identity NZ
Read the full news here: Industry engagement and future plans
SUBSCRIBE FOR MORE
The post Industry engagement and future plans | October Newsletter appeared first on Digital Identity New Zealand.
In this episode of “Unsafe at Any Click”, we talk to Julie Liddell, founding attorney of the EdTech Law Center. We gather insights from the legal side of privacy for students and parents as it relates to EdTech, and so much more.
The post “Unsafe at Any Click” – Episode 5 appeared first on Internet Safety Labs.
“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.” (Donald Rumsfeld)
In my recent post on the strategic uses of ambiguity I explored how, when used thoughtfully, it can be a powerful tool in complex environments.
Then, in my Introduction to Systems Thinking series I explained that by reflecting on boundary judgements, understanding interrelationships, and engaging with multiple perspectives we can gain deeper insights into the systems we navigate.
In this follow-up, written in collaboration with Laura Hilliger, I aim to demonstrate how ambiguity can be used alongside systems thinking in a ‘playful’ way to shine light on our assumptions.
Ambiguity and System Boundaries
This diagram is explained in the post On the strategic uses of ambiguity.
While ambiguity and systems thinking might seem like opposing concepts, when combined deliberately, productive ambiguity becomes a potent tool for exploring complex systems.
Productive Ambiguity — this represents ambiguity that is beneficial or intentional in generating positive outcomes because it helps people shift their view. For example, as part of a strategy where leaving things unsaid or open to interpretation can lead to more flexible and adaptable solutions. (On the Strategic Uses of Ambiguity)
Drawing boundaries around systems is essential for analysis. However, these boundaries often rest on assumptions that may go unexamined. Introducing productive ambiguity allows us to ‘play’ with the boundaries, questioning what we include or exclude and why. Productive ambiguity opens up space for new interpretations and challenges our assumptions in ways that are useful.
🛝🌳⛲ Example: Park Revitalisation Project
Let’s say you’re working on a project to revitalise a local park. The initial assumption might be that only park users and local authorities are the relevant stakeholders. This is the boundary. However, by embracing productive ambiguity, you might ask: “Could nearby schools, local businesses, or community groups influence or benefit from this project in ways we haven’t considered?”
This open-ended question uncovers a wider boundary than had been included in the initial planning — such as schools using the park for outdoor learning or local businesses benefiting from increased foot traffic. The approach reveals hidden dynamics and helps develop a project that serves a broader and more diverse community.
Reflecting on boundary judgements isn’t just about defining the limits of a system. Rather, it’s about acknowledging that these limits can obscure important elements. By introducing productive ambiguity into our boundary judgements, we can challenge rigid definitions and explore what might be hidden just beyond them.
Productive ambiguity encourages us to ask, “What if our assumptions about this boundary are incomplete?” This approach allows us to ‘play’ with the system’s edges, revealing blind spots that a strict boundary might conceal.
Understanding Interrelationships
Image CC BY-ND Visual Thinkery for WAO
In systems thinking, understanding the interrelationships within a system is essential because these connections are complex and often hard to identify.
Productive ambiguity can help us explore these interrelationships by allowing us to entertain multiple interpretations of how components might interact. This, in turn, leads to insights that wouldn’t have emerged through a linear approach.
🛝🌳⛲ Example: Park Revitalisation Project
Returning to our park project, imagine that a local school starts using the park for outdoor activities. Asking open-ended questions helps bring in some productive ambiguity. What kinds of facilities or services might schoolchildren like to have? Let’s say they suggest an ice cream stand. By entertaining this idea, we begin to explore the wider interrelationships: What infrastructure would support this? We might need to consider water supply, electrical outlets, waste management, and accessibility for vendors and customers. As we dig deeper, we start to see how various systems — utilities, transport, local regulations — interconnect with the park in ways that weren’t immediately obvious.
By ‘playing’ with productive ambiguity, we can imagine alternative scenarios and question established cause-and-effect relationships. This exploration can uncover hidden feedback loops and leverage points — small changes that can have significant impacts.
For instance, something as simple as adding an ice cream stand might lead to greater foot traffic, which in turn could spark local businesses’ interest in park events or influence how the park is maintained. Productive ambiguity therefore helps us see beyond the obvious and consider connections that might influence the system in unexpected ways.
Engaging with Multiple Perspectives
Image CC BY-ND Visual Thinkery for WAO
Bringing in diverse viewpoints is a crucial aspect of systems thinking, and productive ambiguity serves as a powerful tool for uncovering these perspectives. It opens the door to diverse interpretations, encouraging new perspectives and exposing areas where thinking may have become too narrow.
🛝🌳⛲ Example: Park Revitalisation Project
For instance, in the project to revitalise a local park, productive ambiguity could be introduced by presenting different possible uses for the space — such as turning it into a sports facility, a community garden, or a children’s play area — without making a definitive decision upfront. This uncertainty encourages people to imagine how these options might affect their lives or the community, drawing out diverse perspectives. As a result, the project might reveal multiple perspectives which have been overlooked, such as local businesses seeing the park as a venue for markets or under-represented groups desiring a space for cultural events. This approach helps encourage a richer, more inclusive and diverse vision for the park’s future.
By introducing some ambiguity into these discussions, we encourage others to challenge their assumptions and share insights that might not otherwise come forward. This collective ‘play’ with ideas can uncover areas of the system that need further attention. It also helps us avoid the trap of thinking that any one perspective is the only valid one.
Using Feedback Loops
Image CC BY-ND Visual Thinkery for WAO
An essential part of systems thinking involves feedback loops: positive loops tend to reinforce certain behaviours, while negative loops work to maintain balance by counteracting them. Considering these loops alongside productive ambiguity can reveal how uncertainty either helps uncover hidden aspects of the system or, conversely, makes them more difficult to detect.
🛝🌳⛲ Example: Park Revitalisation Project
Again, using the park project as an example, the more ideas community members share about the park’s potential, the more excitement builds. This creates a positive feedback loop where new ideas inspire further participation. However, if the discussions remain too vague, some community members may grow frustrated with the lack of direction, creating a negative feedback loop that stalls the project. Here, productive ambiguity ensures that ambiguity leads to curiosity and exploration rather than confusion or paralysis, guiding the feedback loop toward uncovering blind spots rather than reinforcing them.
By understanding how feedback loops interact with productive ambiguity, we can steer systems toward more insightful outcomes. The key is to maintain just enough uncertainty to encourage exploration without allowing it to drift into confusion. This helps ensure that feedback loops support the uncovering of hidden system dynamics rather than obscuring them.
Conclusion
By integrating productive ambiguity into systems thinking practices, we enhance our ability to identify and address assumptions being made. Productive ambiguity allows us to ‘play’ with systems — testing boundaries, exploring interrelationships, and engaging with diverse perspectives. This helps reveal hidden elements, uncover predominant perspectives, and gain deeper insights.
Embracing productive ambiguity doesn’t mean creating unnecessary confusion. Instead, it acknowledges the complexity of systems and the value of multiple interpretations. The approach challenges fixed assumptions, encouraging us to develop more flexible, inclusive, and insightful strategies for navigating complex systems. Through productive ambiguity, we move beyond rigid boundaries and engage with the richness that systems — and the people within them — offer.
Do you need help with this kind of thing? Get in touch with WAO!
Finding Unexamined Assumptions Through Systems Thinking and Ambiguity was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
With less than 10% of warehouses currently automated, the industry is on the verge of a major transformation.
As automation surges, one element will define success: the power of high-quality data.
In this episode, Ries Bouwman, Product Manager at KNAPP, and Gasper Gulotta, Director of Software Consultancy at KNAPP, join hosts Reid Jackson and Liz Sertl to discuss how accurate data is essential to the future of warehouse automation.
Ries and Gasper share examples of how poor data can disrupt automated systems, causing costly delays and inefficiencies. They emphasize that by improving data management, companies can not only prevent these issues but also unlock the full potential of automation.
Automation isn’t just about the machines—it’s about ensuring accurate, complete data that systems can rely on to function smoothly.
In this episode, you’ll learn:
Why data accuracy is critical for successful warehouse automation
The challenges and costs associated with incorrect or incomplete data
The role of GS1 standards in improving data quality across supply chains
Jump into the conversation:
(00:00) Introducing Next Level Supply Chain
(02:28) KNAPP and its journey in automation
(05:22) The importance of data quality in automation
(08:38) Connecting KiSoft to ERP systems
(13:23) Verifying data accuracy
(18:13) Raising industry standards for better data
(24:20) Bad data causing issues for warehouse automation
(30:39) Ries and Gasper’s favorite tech
(34:32) Smarter data collection through AI and quantum computing
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guests:
Ries Bouwman on LinkedIn
Gasper Gulotta on LinkedIn
A pano from VRM Day in April 2015
VRM Days always happen the day before IIW starts, twice each year. Usually, we have about 50 registered and 30 showing up. (Some are online, though we’d rather have their bodies in the room.) For the VRM Day this coming Monday, we’re expecting more than 100 people. So, like Chief Brody said in Jaws, we need a bigger room.
And we have one. So that’s good. Logistics will be challenging, but we’re on top of them.
IIW is also close to sold out. Last I checked, there were just nine tickets left.
Here is copy from our Eventbrite page as it now stands:
The Main Thing
At this VRM Day, Ben Moskowitz, VP of Innovation, and Ginny Fahs, Director of Product R&D at Consumer Reports, will lead a discussion of some of their early R&D concepts around a new approach to customer service: one in which personal AI agents represent customers’ best interests.
This is CX (Customer Experience) re-imagined and re-implemented in ways that are no less real and human but far more intelligent, mutually informative, and useful than what all of us have experienced thus far in the Digital Age. CR is looking for feedback and collaboration as they move forward. They plan to participate in IIW as well.
As usual, everyone who wants to share what they’re working on in the VRM space will have time to present, discuss, and prep for IIW.
Schedule
Morning:
9am – Noon: Consumer Reports presentation and discussion (on the above)
Lunch
Noon – 1:30: Sports Page or Zareen’s (diagonally across the intersection)
Afternoon:
Adrian Gropper on HIE of One, Medical AI Assistant (MAIA), and personal AI in health care
Joe Andrieu on the Digital Fiduciary Initiative
Cryptid / KwaaiNet demo
Iain Henderson on Data Pal
Richard Whitt on GliaNet Alliance
Customer Commons on IEEE P7012
Paul Trevithick (Mee.Foundation) on Private Advertising
Discussion on any or all of the above
Listing planned IIW sessions
Subject to change, of course.
Since we have a lot to cover, please be there at 9am sharp, and be back from lunch at 1:30.
Note that there are only three lunch places nearby. Cucina Venti is good but relatively expensive, and service is slow. Sports Page makes good sandwiches and has lots of picnic tables. It’s where most of us usually go. Haven’t tried Zareen’s, behind the Sports Page, where Sunny Bowl and other restaurants used to be. It has “familiar & innovative halal spins on Indian & Pakistani cooking.”
Biometrics are transforming the integration of our physical and digital worlds, especially in decentralised identity systems. Technologies like fingerprint and facial recognition not only enhance security but also provide a seamless way to verify identities while safeguarding user privacy. As New Zealand progresses with its Digital Identity Services Trust Framework (DISTF), these innovations empower individuals with greater control over their personal data.
In September, DINZ hosted a webinar featuring industry leaders including Steven Graham and Graeme Prentice from NEC, Roger Ford from Innovise NZ, Dr Vica Papp from Digital Identity NZ, and James Little from DIA. They shared insights on the pivotal role of biometrics in shaping the future of digital identity.
For an in-depth look, check out the review by Biometrics Update here.
Watch Recording
The post Biometrics Update: DINZ and NEC Webinar in Review appeared first on Digital Identity New Zealand.
Join us on November 14 for our Community Call to reflect on our first Deep Dive Week
The post Community Call: Learnings and reflections from our first ‘Deep Dive Week’ appeared first on The Engine Room.
21 October 2024
Digital Identity NZ (DINZ), through its Policy and Regulatory Subcommittee, has provided feedback to the Ministry of Business, Innovation and Employment (MBIE) for the proposed open banking regulations and standards under the Customer and Product Data Bill. This collaborative submission reflects insights from DINZ members across New Zealand’s digital identity sector, representing both large and small organisations.
DINZ fully supports the Bill’s goal to unlock the value of customer data, fostering competition and innovation. However, our submission highlights specific areas where the proposed rules could be enhanced to better achieve these objectives.
Empowering Customers through Open Banking
DINZ appreciates the Bill’s focus on giving customers control over their data, which can drive a more competitive and dynamic marketplace. However, we raised concerns around the prioritisation of the banking and electricity sectors, believing that a broader scope of competitive third-party providers is essential for success. Additionally, affordability is crucial, as the cost of third-party services could hinder widespread adoption.
Data Security and Privacy
Maintaining data security is a key focus for DINZ. While the Bill requires transparency from data holders and accredited requesters, we recommend aligning with the New Zealand Privacy Act 2020 to safeguard consumers’ data without unnecessarily disclosing sensitive information. This ensures robust protection and trust.
Learning from Australia’s Open Banking Journey
Reflecting on Australia’s slow uptake of open banking, DINZ cautions against over-reliance on the Digital Identity Services Trust Framework (DISTF) as a singular solution. A more holistic approach is needed to address identity, verification, and consent challenges in the context of open banking. Colin Wallis, Executive Director of Digital Identity NZ says:
“DINZ supports the general direction indicated in the discussion paper, however it considers that not enough attention is being directed to the reasons behind the slow take-up in Australia. Additionally unintended consequences may arise from its seemingly over reliance on the DISTF as the magic bullet to resolve all the digital identity, verification, attribute exchange and consent – as much as we would all like that.”
DINZ is committed to working with MBIE to ensure a secure, efficient, and inclusive open banking framework that benefits all Kiwis.
You can read the full submission here: DINZ_Submission_on_CPD_Open_banking_designation_rules_10_Oct_2024_Final-Signed.pdf (digitalidentity.nz)
For media inquiries or further information, please contact:
Email: info@digitalidentity.nz
Phone: + 64 9 394 9032
About Digital Identity NZ
Digital Identity NZ (DINZ) is a not-for-profit, membership-funded association with around 100 organisations from both the public and private sectors. Representing diverse industries and individuals, DINZ is the leading voice for digital identity in Aotearoa. As part of the New Zealand Tech Group (NZTech), we connect the digital identity community and actively influence policy and solutions. Our members play a crucial role in advancing digital identity across various sectors—from public-facing government services to open banking, account opening, and customer and product data. These initiatives rely on digital identity, working alongside AI, biometrics, and cloud technologies.
The post Shaping the Future of Open Banking in Aotearoa: DINZ Responds to Proposed Designation Regulations and Standards appeared first on Digital Identity New Zealand.
The FIDO Alliance is aware of passkey lock-in, and it’s actively working to address that:
With all relevant operating systems now natively supporting passkeys, companies have been increasingly adopting them as an alternative to passwords. Relying on passkeys minimizes the risk of getting hacked, as users don’t have access to their cryptographic keys, and intercepting them is significantly more challenging. However, those switching between different service providers may prefer traditional passwords, as there’s currently no easy way to import or export passkeys. To minimize the friction separating distinct platforms, the FIDO Alliance is working on a solution that makes moving passkeys between them a breeze.
The FIDO Alliance has published (via Neowin) a working draft encompassing specifications that would make moving passkeys between providers possible. When implemented, users would be able to securely import and export their passkeys, making switching platforms less challenging. Read more of the article.
Digital communities are collections of individual entities that are connected together. They can be modeled as graphs, with the individuals being nodes and their relationships being edges.
Traditionally, identity models have focused on the nodes, but in Musings of a Trust Architect: Edge Identifiers & Cliques, I suggested that both private keys and public-key identifiers could be based on the relational edges, and that when you combined a complete set of edges you could create a cryptographic clique, where the group was seen as an entity of its own, with the identities of any participants hidden through the use of a Schnorr-based signature.
My first look at cliques focused on the technical definition, which requires that cliques be “closed”, meaning that there’s a relationship between every pair in the group and that those pairwise edges form the clique identity among them.
However, creating closed graphs becomes increasingly difficult as the graph size grows. There are some alternatives which I discuss here: open cliques and fuzzy cliques. The entities forming a clique also don’t have to be people, as I discuss in cliques of devices.
Open Cliques
Cryptographic cliques don’t have to be fully closed. Open cliques are also possible. (In graph theory these technically are not called “cliques”, but I’m going to continue to use the term for cryptographic identifiers that are based on edges.)
While the concept of a fully connected clique provides clear value in graph theory, such structures can become computationally intensive, especially as the group size increases. Open cryptographic cliques, which are not completely interconnected, may then be used instead.
Open cliques support different sorts of modeling, for groups where not everyone is connected and where the relationships are fluid. They also allow for easier growth: a clique can organically add a new member when a single participant creates a relationship with them, without the need to define the new member’s relationship to everyone in the clique (especially as most of those relationships would not exist).
For example, Bob might not actually have a close or independent relationship with his mother-in-law, Anna, while Mary’s best friend from college, Beth, might join the clique when she stays with the family, despite the fact that she only has a real relationship with Mary. (However, more relationships, and thus edges, might develop over time!)
While open cliques may lack the complete interconnectedness of their closed counterparts, they offer a realistic representation of the evolving nature of dynamic social relationships. One of the main questions regarding them is when and how to recognize new edges as an open clique evolves, and thus when and how to rotate the clique’s overall keys.
Fuzzy Cliques
As discussed in the appendix to this article, there are currently two major Schnorr-based MPC signature systems that could be used as the foundation of cliques: FROST and MuSig2. Each comes with its own advantages and limitations, but one of the advantages of using FROST is that it allows for the creation of fuzzy cliques, thanks to its ability to create threshold signatures (with m-of-n agreement required to sign, where m ≤ n).
This allows group decisions or representations to be based on a subset (threshold) of members rather than requiring unanimity, as would be required when using MuSig2 in its native form. Using thresholds to define group interactions adds a degree of “fuzziness” or flexibility to the representation of those groups and their actions, at the price of higher latency and the fact that the theoretical implications are not as well studied.
There’s one other catch: fuzzy cliques are the one situation where the Relationship Signature Paradigm can’t be used. Though we still create the relational edges, to allow any pair of participants in the clique to make joint decisions, the clique keys are created by the individual participants, not the edges, ensuring that we have thresholds of participants making decisions, not thresholds of edges (which would quickly become confusing!).
Even for a triadic clique, the privacy implications of using a threshold key to represent the clique are notable.
Imagine that the participants generated two FROST keys for the triadic clique, one with a 2-of-3 threshold and one with a 3-of-3 threshold. If everyone agreed, they could all sign with their share fragments of the 3-of-3 private key, and anyone could verify the result against the 3-of-3 public key and know that the group was in perfect consensus.
But what if you only required the consensus of two members of the group? After all, Joshua probably won’t be making a lot of decisions for a while. Theoretically, you could just sign with one of your relational edge keys, such as the Mary-Bob relational edge key. That demonstrates the consensus of two members of the clique and supports accountability: you know which two participants signed.
But, if you instead sign with the 2-of-3 threshold key for the clique you get to take advantage of the aggregatability that’s baked into Schnorr. With it, no one knows which two people signed (or indeed, if two or three people signed). They just know that at least the threshold of people within the group signed. It’s a powerful privacy enhancement that really shows off the power of fuzzy cliques.
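To make that 2-of-3 case concrete, here is a minimal sketch of FROST threshold signing in Rust. It assumes the Zcash Foundation FROST crates (shown with `frost_secp256k1` and their Trusted Dealer key generation; exact item names may differ slightly between crate versions), and the participants and message are purely illustrative, not anything specified by the cliques design itself.

```rust
use std::collections::BTreeMap;
use frost_secp256k1 as frost;
use rand::rngs::OsRng;

fn main() -> Result<(), frost::Error> {
    let mut rng = OsRng;

    // Trusted-dealer key generation for the triadic clique: 3 shares, threshold 2.
    let (shares, pubkey_package) = frost::keys::generate_with_dealer(
        3, // max_signers: e.g. Mary, Bob, Joshua
        2, // min_signers: any two can sign for the clique
        frost::keys::IdentifierList::Default,
        &mut rng,
    )?;

    // Each participant verifies and keeps their own key package.
    let mut key_packages = BTreeMap::new();
    for (id, share) in shares {
        key_packages.insert(id, frost::keys::KeyPackage::try_from(share)?);
    }

    // Round 1: two of the participants (say Mary and Bob) commit to nonces.
    let signers: Vec<_> = key_packages.keys().copied().take(2).collect();
    let mut nonces = BTreeMap::new();
    let mut commitments = BTreeMap::new();
    for id in &signers {
        let (n, c) = frost::round1::commit(key_packages[id].signing_share(), &mut rng);
        nonces.insert(*id, n);
        commitments.insert(*id, c);
    }

    // Round 2: each signer produces a signature share over the message.
    let message = b"the clique approves this decision";
    let signing_package = frost::SigningPackage::new(commitments, message);
    let mut sig_shares = BTreeMap::new();
    for id in &signers {
        let share = frost::round2::sign(&signing_package, &nonces[id], &key_packages[id])?;
        sig_shares.insert(*id, share);
    }

    // The coordinator aggregates the shares into one ordinary Schnorr signature.
    let group_sig = frost::aggregate(&signing_package, &sig_shares, &pubkey_package)?;

    // Verifiers see only the clique's single public key and a single signature:
    // they learn that a threshold signed, not which two participants did.
    assert!(pubkey_package.verifying_key().verify(message, &group_sig).is_ok());
    Ok(())
}
```

Swapping the threshold from 2 to 3 in `generate_with_dealer` would give the 3-of-3 “perfect consensus” key described above, while the verifier-facing interface stays exactly the same.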
Fuzzy cliques allow for real-world decision-making dynamics, where different sorts of decisions might require a single person’s agreement, a majority’s agreement, a super-majority’s agreement, or everyone’s agreement. This creates a model for fully decentralized decision-making that’s resilient and fault tolerant, all while supporting both individual privacy and group accountability (while still allowing for individual accountability using relational edges).
Cliques of Devices
Thus far, I’ve largely presumed that relational edges and cryptographic cliques are created by people. But, that doesn’t have to be the case: independent nodes in a graph can be entities of any type, including devices.
In my first article, I touched upon the idea that a clique could define not just a group, but also a singular person’s identity. This could be done using devices. Imagine that a person has a few devices that together form the basis of his digital identity: a hub of information that contains his credentials; a biometric ring that verifies his physical identity, primarily to unlock that hub; and a coordinator that allows a clique-identity to communicate with the network. The following diagram shows how our old friend Bob could be defined as an open clique including devices:
Using the clique-of-cliques model, this then might be the identity that’s linked in with Mary and Joshua to form their triadic nuclear-family clique:
Though these examples suggest a clique where devices and real people are mixed together, that’s not the only option. Another example might be a fuzzy clique made up of three automated factcheckers, which are all devices. Together, any two can issue a finding of “TRUE” or “FALSE”:
Again using the clique-of-cliques model, these fact checkers could then interact with other identities, such as Dan and Ty, who write together.
The Fact Checkers interact with the authors’ edge relationship (known by their joint pseudonym, “James”), to sign off on the validity of their work. Thanks to the aggregatability of Schnorr signatures, no one knows (or cares) that the Fact Checkers are three devices or the authors are two people!
Conclusion
Cliques offer a powerful new model for identity control (and more generally, for control of many sorts of digital assets). But, using closed cliques has drawbacks.
Two other models offer different utility:
Open Cliques allow for the modeling of more realistic social situations while simultaneously reducing computational costs, but they create new questions for theoretical understanding and for figuring out how to maintain public and private keys for the clique.
Fuzzy Cliques open up the possibilities for authorizations, agreements, and other decisions to be made by portions of a group rather than the group as a whole, but they depend on either FROST or some other (theoretical) threshold signature system, and they disallow the creation of a clique using relational edges.
In addition, cliques don’t have to be made up only of people:
Cliques of Devices show how cliques could also include AIs, oracles, fact checkers, hardware wallets, biometric rings, and other computerized programs, and that they could interact either as parts of cliques or as separate entities!
These possibilities are just the beginning. I think that edge identifiers and cliques could be a powerful new tool for expanding the design of identities online.
How could you use them? How would you expand them? What would you like to see next?
Appendix: FROST & MuSig
There are currently two major Schnorr-based signature systems, FROST and MuSig2, both of which support Multi-Party Computation (MPC) signing.
FROST is a Schnorr-based multisig system that originated in a 2020 paper. As of 2024, it’s just coming into wide use thanks to projects such as ZF FROST and wallets such as Stack Wallet.
🟢 Possible efficiency improvements for larger cliques.
🟢 Supports thresholds (m of n).
🟢 Privacy for thresholds.
🛑 Limited accountability for thresholds.
🛑 Can’t build clique from edges if using thresholds.
🛑 More rounds for signing.
🟨 Allows Distributed Key Generation or Trusted Dealer Generation.
MuSig2 is a Schnorr-based multisig system that dates back to 2020 (when MuSig2 was introduced) and before that 2018 (when MuSig1 was introduced). It’s been well-studied and is detailed in BIP 328, BIP 390, and BIP 373, providing strong integration with Bitcoin, especially since its recent merge into libsecp256k1.
🛑 No thresholds (n of n).
🟨 But can mimic thresholds with Taproot trees.
🟢 Full accountability for signatures.
🟢 Fewer rounds for signing.
🟢 Can always build clique from edges.
Two of the features of Schnorr-based signature systems that best support edge identifiers and cryptographic cliques are aggregation and MPC.
Aggregation. Schnorr signatures are aggregatable: they’re mathematically added together, producing a final multisig that’s the same size as an individual signature would be. As a result, signatures are indistinguishable: you don’t know how many people signed or who signed, simply that a signature is valid (or not).
MPC. Multi-Party Computation means that each participant has a secret (here, a key share), which they can use together without revealing that secret. It’s what allows individuals to jointly create an edge-identifier key and then for edges to jointly create a clique key.
For more on Schnorr, see my Layperson’s Intro to Schnorr.
One of the drawbacks to passkeys is that currently there’s no way to import or export them between devices. The FIDO Alliance wants to change that.
It’s been around two years since passkeys came onto the scene, and the technology has come a long way in making the world a passwordless place. Yet, one feature that’s been absent is the ability to import or export passkeys between devices.
That is set to change, as the FIDO Alliance — the working group behind the technology — has published draft specifications for the Credential Exchange Protocol (CXP) and Credential Exchange Format (CXF), which would enable the secure transfer not only of passkeys but also of other forms of authentication credentials.
Amazon says 175 million customers now use passkeys to log in:
Amazon has seen massive adoption of passkeys since the company quietly rolled them out a year ago, announcing today that over 175 million customers use the security feature.
“Today, we’re excited to share that more than 175 million customers have enabled passkeys on their Amazon accounts, allowing them to sign in six-times faster than they could otherwise,” says Amazon.
FIDO Alliance Working on Making Passkeys Portable Across Platforms:
Passkeys are an industry standard developed by the FIDO Alliance and the World Wide Web Consortium, and were integrated into Apple’s ecosystem with iOS 16, iPadOS 16.1, and macOS Ventura. They offer a more secure and convenient alternative to traditional passwords, allowing users to sign in to apps and websites in the same way they unlock their devices: With a fingerprint, a face scan, or a passcode. Passkeys are also resistant to online attacks like phishing, making them more secure than things like SMS one-time codes.
The draft specifications, called Credential Exchange Protocol (CXP) and Credential Exchange Format (CXF), will standardize the secure transfer of credentials across different providers. This addresses a current limitation where passkeys are often tied to specific ecosystems or password managers.
J.R. Rao of IBM and Akila Srinivasan of Anthropic Elected to the OASIS Open Project's TSC Leadership
Boston, MA, USA, 15 October 2024 – The Coalition for Secure AI (CoSAI), an OASIS Open Project, announced the formation of its Technical Steering Committee (TSC), which is responsible for the overall technical health and direction of the project. The TSC will advise the Project Governing Board (PGB), oversee releases, and manage the efforts of the project’s three initial workstreams along with their respective chairs, contributors, and maintainers. The TSC will promote initiatives that align with CoSAI’s mission to promote secure-by-design AI systems.
J.R. Rao from IBM and Akila Srinivasan from Anthropic have been elected co-chairs of the TSC. They will play a central role in steering the direction of the workstreams to ensure that they contribute to the overall goals of CoSAI. J.R. and Akila bring a wealth of experience and leadership from their respective organizations and will be instrumental in driving CoSAI’s technical direction.
“Securing AI, openly and collaboratively, will be critical for inspiring trust and enabling its acceptance by consumers and enterprises alike. As TSC co-chair, I am committed to guiding CoSAI’s three workstreams to establish best practices and frameworks that enhance the security of AI systems,” said J.R. Rao, TSC co-chair, of IBM.
“As co-chair of the CoSAI TSC, I’m committed to developing frameworks and controls that help us attest to the trustworthiness and integrity of AI models,” said Akila Srinivasan of Anthropic. “By fostering transparency and control, we empower organizations to build secure and responsible AI systems that protect users and pave the way for a safe and innovative future.”
The TSC has launched three workstreams aimed at advancing the security of AI systems and will oversee their efforts to establish best practices, governance, and frameworks for AI security:
Software Supply Chain Security for AI Systems
The governance structure for these workstreams ensures community collaboration, transparency, and alignment with CoSAI’s long-term goals. For more details on the governance model, visit the TSC and Workstream Governance documentation in GitHub.
About CoSAI:
CoSAI is an open source ecosystem of AI and security experts from industry-leading organizations dedicated to sharing best practices for secure AI deployment and collaborating on AI security research and product development. CoSAI operates under OASIS Open, the international standards and open source consortium.
Media inquiries: communications@oasis-open.org
The post Coalition for Secure AI Forms Technical Steering Committee to Advance AI Security Workstreams appeared first on OASIS Open.
Blockchain Commons’ work to create open, interoperable, and secure digital infrastructure continued in Q3 2024. Here were some of our main topics of interest:
Gordian Envelope: Videos, TPAC, dCBOR & Unicode
Seed Recovery: BIP-85, SSKR for Ledger
FROST: FROST Implementers Meeting, FROST in Gordian, Stack Wallet
Reference Upgrades: Gordian SeedTool for iOS 1.6.2, Swift 6 Stack Upgrade, More Envelope Signatures in Rust
Developer Resources: Stack Organization, New Envelope Pages
What’s Next?
Gordian Envelope
Gordian Envelope, Blockchain Commons’ privacy-preserving data-interchange format for data at rest and (using GSTP) data on the wire, remains one of our top priorities. This quarter, we worked to make it more accessible and explored new cases for its usage.
Videos. We produced a trio of videos to offer an introduction to Gordian Envelope: a teaser, an overview, and a look at extensions. They’re must-watch viewing if you’re interested in adopting a data-storage and data-interchange format that actually focuses on privacy.
Videos: Envelope Teaser, Understanding Envelopes I, Understanding Envelopes II
Presentations at W3C TPAC (Technical Plenary and Advisory Committee): We’ve worked extensively on using Gordian Envelope to store digital assets such as seeds and SSKR shares. At TPAC 2024 this year, we presented some new thoughts on using various Envelope and CBOR alternatives in the rechartered DID group, where Christopher is an Invited Expert. We also discussed using Gordian Envelope for some specific DID use cases, which we hope to explore more. There’s more in the minutes and the slides.
dCBOR & Unicode. Gordian Envelope is built on dCBOR, our deterministic CBOR profile. In Q3, we updated our dCBOR Internet-Draft to v11. This was to incorporate Unicode Normalization Form C (NFC), to ensure that Unicode strings, used for all text in Gordian documents, will always be deterministic.
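To illustrate why NFC matters for determinism, here is a minimal sketch (using the general-purpose `unicode-normalization` crate, not Blockchain Commons’ own dCBOR library) showing that two different Unicode encodings of the same visible text only produce identical bytes, and thus identical CBOR output, once both are normalized to NFC.

```rust
use unicode_normalization::UnicodeNormalization;

fn main() {
    // "é" can be encoded as one precomposed codepoint (U+00E9)...
    let precomposed = "\u{00E9}";
    // ...or as "e" followed by a combining acute accent (U+0065 U+0301).
    let decomposed = "\u{0065}\u{0301}";

    // The two strings render identically but have different bytes,
    // so naively encoding each would yield different serializations.
    assert_ne!(precomposed.as_bytes(), decomposed.as_bytes());

    // Normalizing both to NFC before encoding makes the bytes equal,
    // which is the property a deterministic profile needs for text strings.
    let a: String = precomposed.nfc().collect();
    let b: String = decomposed.nfc().collect();
    assert_eq!(a.as_bytes(), b.as_bytes());
}
```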
Seed Recovery
The safe storage and recovery of seeds has long been a focus at Blockchain Commons, because it’s the heart of #SmartCustody. Our August 7th Gordian Developers Meeting focused on the topic and gave community members the ability to talk about their own efforts.
BIP-85. Aneesh Karve presented on BIP-85. This is a methodology for deriving many secrets from a single seed.
SSKR for Ledger. SSKR has been one of Blockchain Commons’ most successful releases because it allows developers to safely use Shamir’s Secret Sharing. Aido has incorporated SSKR into Ledger Seed Tool, which now allows you to shard your Ledger secrets yourself (without depending on Ledger Recovery and Ledger’s privacy-busting KYC-compliant partners).
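For readers new to the underlying idea, Shamir’s Secret Sharing splits a secret into n shares such that any m of them can reconstruct it, while fewer reveal nothing useful. The following is a minimal sketch of that m-of-n behaviour using the generic `sharks` crate; it illustrates the concept only and is not SSKR’s actual share format or Blockchain Commons’ libraries.

```rust
use sharks::{Share, Sharks};

fn main() {
    // Require any 2 of the shares to recover the secret.
    let sharks = Sharks(2);

    let seed = b"example seed bytes, not a real secret";
    let dealer = sharks.dealer(seed);

    // Hand one share each to three different custodians.
    let shares: Vec<Share> = dealer.take(3).collect();

    // Any two shares are enough to recover the original seed...
    let recovered = sharks.recover(&shares[..2]).unwrap();
    assert_eq!(recovered, seed.to_vec());

    // ...while a single share on its own is below the threshold.
    assert!(sharks.recover(&shares[..1]).is_err());
}
```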
Videos: Seed Recovery, BIP-85, SSKR for Ledger
Our Gordian Developer community is one of our most important resources to ensure that we’re doing work that meets the needs of wallet developers. Sign up for our Gordian Developer announcements to get the latest info on our upcoming meetings!
FROST
FROST is an up-and-coming multisig method that takes advantage of Schnorr-based signatures and Multi-Party Computations (MPCs) for key generation and signing. It’s an important new technology for creating keys that are more resilient and more secure. We’ve been supporting it for more than a year now.
FROST Implementers Meeting. Our second FROST Implementers Meeting occurred on September 18th. It gave people working on FROST specs, libraries, and cryptography the ability to talk about their most recent challenges. We’ve got a full record of the event, including videos, slides, summary, and transcript. It was great to bring the community together and plan for the future!
Video: ChillDKG
FROST in Gordian. We’ve been doing our own work with FROST! Our Rust and Swift Gordian stacks are switching to fully BIP-340 compliant Schnorr signatures. We’ve also been experimenting with FROST support, to allow the FROST signing method using the Trusted Dealer model. We’re waiting on an updated release of the secp256k1 Rust crate so that we can publish our own Rust crates and Envelope-CLI, but we hope to have our full reference implementation available within the month.
Stack Wallet. We did some light review of the Stack Wallet this quarter, which is the first wallet we know of that incorporates FROST. We’d love to see a security review of its FROST design, but from what we can see from usage, it not only implements FROST but also supports changing thresholds on the fly, which is one of FROST’s really amazing capabilities.
We will have a FROST Developer’s Meeting on December 4th that will provide advice & support for wallet developers who want to implement FROST. We’ve already scheduled Stack Wallet to give a presentation, since they’ve already done it!
Thanks to The Human Rights Foundation for their support of our FROST work in 2024.
Reference Upgrades
Our reference apps and libraries suggest best practices and offer examples of how to use our specifications.
Gordian SeedTool for iOS 1.6.2. We released a minor update of Gordian Seed Tool for iOS that makes our card entropy compatible with other sources, that allows the export of SSKR Envelopes in UR format, and that resolves a few other incompatibilities.
Swift 6 Stack Upgrade. We also upgraded our entire Swift stack to Swift 6. This allows us to take advantage of the Swift 6 concurrency model, remove unnecessary dependencies on forked libraries, and convert the tests of some modules to the new Swift Testing framework. This work can already be found in our Swift libraries, but we’re waiting to release a new Seedtool for iOS until we have other new features to deploy.
More Envelope Signatures in Rust. Fully BIP-340 compatible signatures are just one of our expansions to our Envelope Rust reference libraries. You can now also do Ed25519 signing (again, as soon as we’re able to release our new crates).
Developer Pages
Our Developer Pages are intended to help wallet developers to use our specifications (and other important standards like FROST). If there’s anything you’d like to see that isn’t on the pages, please let us know. This quarter, we made some major updates.
Stack Organization. Our biggest upgrade was a reorganization of the website to focus on the technology stacks that we offer. We have a core stack (which is our fundamental techs like dCBOR and Envelope), a user experience stack (which makes it easier for users to transmit and view data), and a crypto stack (which does the heavy lifting of things like sharding seeds). This is how it all fits together!
New Envelope Pages. Last quarter, we did work on the Gordian Sealed Transaction Protocol. This quarter, we incorporated that into our developer pages, with new content for GSTP and Encrypted State Continuation, plus updates to our look at Collaborative Seed Recovery.
What’s Next?
Our most exciting work planned for Q4 may be our December 4th FROST Implementers Meeting. If you are considering incorporating FROST into your own work, please be sure to sign up for our announcements-only Gordian Developers list to receive notifications on the meeting.
Or, our most exciting Q4 work may be our new work on cliques, which we think is an innovative new way to look at identity. We’ve released the first article on the topic, with a few more to come.
We’ll generally be talking with members of the identity and credentials community in Q4, including a presentation at the W3C Credentials Community Group, planned for October 22nd.
We’re also looking to roll out our work on FROST and Ed25519 signing, which just requires the official deployment of an updated secp256k1 Rust crate.
There are more projects under consideration! We’re thinking about producing a “Gordian Companion” to offer a reference for storing SSKR shares. We’re looking into more grants, as funding continues to be poor for many of our partners. (You can help by becoming a sponsor for us at any level!) And of course we’re looking forward to 2025!
TV screen courtesy Freepik.
Kia ora,
In December 2019, members elected the first Digital Identity NZ Executive Council. The Council is the governing group for the association, providing guidance and direction as we navigate the evolving world of digital identity in Aotearoa. Each Council member is elected for a two-year term, with elections held annually, and results notified at the Annual Meeting in December. As we approach the end of the year, it is time for nominations for the Council seats coming up for re-election.
Executive Council Nominations
There is now an opportunity to put yourself forward, or nominate someone else, for a role on the Digital Identity NZ Executive Council. This year we have vacancies for the following positions:
Corporate – Major (2 positions)
Corporate – Other (2 positions)
SME & Start-up (2 positions)
The nominees for the above positions must be from a Digital Identity NZ member organisation (including government agencies) and belong to the same Digital Identity NZ Membership Group they are to represent on the Executive Council. If you are unsure of your organisation’s membership category, please email elections@digitalidentity.nz.
All nominations must be entered into the online form by 5pm, Monday 4 November 2024.
Nomination Form
Digital Identity NZ Executive Council roles and responsibilities include:
Direct and oversee the business and affairs of Digital Identity NZ.
Attend monthly Executive Council meetings, usually two hours in duration (video conferencing is available).
Represent Digital Identity NZ at industry events and as part of delegations.
Assist in managing and securing members for Digital Identity NZ.
Participate in Digital Identity NZ working groups and projects.
Where agreed by the Executive Council, act as a spokesperson for Digital Identity NZ on issues related to working groups or projects.
Be a vocal advocate for Digital Identity NZ.
Online Voting
Voting will take place online in advance of the meeting, with the results announced at the Annual Meeting. Please refer to the Charter for an outline of Executive Council membership and the election process. Each organisation has one vote, which is allocated to the primary contact of the member organisation.
Annual Meeting 2024
The Annual Meeting is scheduled for 10:00am on Thursday, 5 December 2024, and will be held via Zoom.
REGISTER NOW
Notices and Remits
If you wish to propose any notices or motions to be considered at the Annual Meeting, please send them to elections@digitalidentity.nz by 5:00pm on Thursday, 14 November 2024.
Key Dates:
14 October: Call for nominations for Executive Council representatives issued to members
4 November: Deadline for nominations to be received
11 November: List of nominees issued to Digital Identity voting members and electronic voting commences
14 November: Any proposed notices, motions, or remits to be advised to Digital Identity NZ
5 December: Annual Meeting, results of online voting announced
Background:
From the beginning, we have asked that you consider electing a diverse group of members who reflect the diversity of the community we seek to support. We ask that you do so again this year. The power of that diversity continues to shine through in the new working groups this year, particularly as we consider the importance of Te Tiriti, equity, and inclusion in a well-functioning digital identity ecosystem.
The Council has identified several areas where diversity, along with expertise in the digital identity space, could help us better serve the community. Nominations from organisations involved in kaupapa Māori, civil liberties, and the business and service sectors are particularly encouraged. We also encourage suggestions from young people within your organisations, as their viewpoint is extremely valuable and relevant to the work we perform. As an NZTech Association, Digital Identity NZ adopts its Board Diversity and Inclusion Policy, which you can read here.
The post DINZ Executive Council Elections & Annual Meeting 2024 appeared first on Digital Identity New Zealand.
“Passkeys,” the secure authentication mechanism built to replace passwords, are getting more portable and easier for organizations to implement thanks to new initiatives the FIDO Alliance announced this month.
At the FIDO Alliance’s Authenticate Conference in Carlsbad, California, on Monday, October 14, researchers are announcing two projects that will make passkeys easier for organizations to offer—and easier for everyone to use. One is a new technical specification called Credential Exchange Protocol (CXP) that will make passkeys portable between digital ecosystems, a feature that users have increasingly demanded. The other is a website, called Passkey Central, where developers and system administrators can find resources like metrics and implementation guides that make it easier to add support for passkeys on existing digital platforms.
“To me, both announcements are part of the broader story of the industry working together to stop our dependence on passwords,” Andrew Shikiar, CEO of the FIDO Alliance, told WIRED ahead of Monday’s announcements. “And when it comes to CXP, we have all these companies who are fierce competitors willing to collaborate on credential exchange.”
Passkey Central provides leaders with education about passkeys and steps to implement them for consumer sign-ins
October 14, 2024 — Carlsbad, CA — The FIDO Alliance today announced Passkey Central, a new web resource where consumer service providers can learn more about why and how to implement passkeys for simpler and more secure sign-ins.
Passkeys, an easy-to-use and secure replacement for passwords, are already available for consumer services around the world including Adobe, Amazon, Apple, eBay, Google, Hyatt, Microsoft, Nintendo, NTT DOCOMO, PayPal, PlayStation, Shopify and TikTok. More than 13 billion user accounts can now leverage passkeys. Passkeys offer significant benefits to implementing organizations, including faster user sign-ins, higher sign-in success rates, reduced account takeovers, reduced costs associated with authentication, and lower cart abandonment. Passkey Central provides product leaders and architects with the information required to implement and realize similar benefits with passkeys.
Passkey Central provides visitors with actionable, data-driven content to discover, implement, and maintain passkeys for maximum benefits over time. The comprehensive resources on Passkey Central include:
Introduction to passkeys
Business considerations and metrics
Internal and external communication materials
Implementation strategies & detailed roll-out guides
UX & Design guidelines
Troubleshooting
And more implementation resources, such as glossary, figma kits, and accessibility guidance
Service providers should go to passkeycentral.org to get started with passkeys.
“Passkeys are the simplest and most secure way for consumers to access the global connected economy,” said Andrew Shikiar, CEO of FIDO Alliance. “The early adoption of passkeys has been remarkable and it is now time to help more service providers break their dependence on passwords. Passkey Central will accelerate the use of passkeys by providing product leads and architects with independent and authoritative guidance on why and how to implement passkeys for their own website and services.”
A research-backed public resource
The content for Passkey Central is based on several years of FIDO Alliance research, including subject matter expert interviews, focus groups and UX testing, to determine what guidance businesses need when implementing passkeys. Investment and participation from the following companies as Founding Underwriters enabled the underlying research, web and content development costs required to launch Passkey Central: Craig Newmark Philanthropies, Google, Trusona and Yubico.
“Our adversaries attack nations in cyberspace using techniques that are blocked by passkeys and related technologies. We need to do what we can to accelerate passkey adoption, and to help regular people understand that passkeys protect countries, and make their online lives a little easier.” – Craig Newmark, Founder and ISR, Craig Newmark Philanthropies
“Trusona is committed to revolutionizing the authentication experience for digital businesses, ensuring customers can sign up and sign in simply, swiftly, and securely. Passkey Central brings that mission to life with a new resource that will positively impact people’s digital lives today and in the future.” – Ori Eisen, CEO, Trusona
“Phishing attacks resulting from stolen login credentials is one of the greatest cybersecurity risks facing individuals and enterprises today. In order to achieve a phishing-resistant passwordless future, the solution is clear: prioritize education on passkey implementation and broad support for passkey authentication options globally. Passkey Central is a major step toward achieving this goal, and we look forward to working with the FIDO Alliance toward accelerating adoption of passkeys.” – Derek Hanson, VP, Standards and Alliances, Yubico
“The best way to accelerate passkey adoption is to give website owners and app owners the information they need to get oriented with the benefits of passkeys and guidance on how they can start deploying passkeys. FIDO’s Passkey Central will be a key resource that helps meet this need.” – Sam Srinivas, Product Management Director, Google and FIDO Board Rep for Google.
For more information about Passkey Central, visit passkeycentral.org.
About the FIDO Alliance
The FIDO (Fast IDentity Online) Alliance was formed in July 2012 to address the lack of interoperability among strong authentication technologies and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services. For more information, visit www.fidoalliance.org.
The FIDO Alliance has published a working draft of a new set of specifications for secure credential exchange that, when standardized and implemented by credential providers, will enable users to securely move passkeys and all other credentials across providers. The specifications are the result of commitment and collaboration amongst members of the FIDO Alliance’s Credential Provider Special Interest Group including representatives from: 1Password, Apple, Bitwarden, Dashlane, Enpass, Google, Microsoft, NordPass, Okta, Samsung and SK Telecom.
Secure credential exchange is a focus for the FIDO Alliance because it can help further accelerate passkey adoption and enhance user experience. Today, more than 12 billion online accounts can be accessed with passkeys and the benefits are clear: sign-ins with passkeys reduce phishing and eliminate credential reuse while making sign-ins up to 75% faster, and 20% more successful than passwords or passwords plus a second factor like SMS OTP.
With this rising momentum, the FIDO Alliance is committed to enabling an open ecosystem, promoting user choice and reducing any technical barriers around passkeys. It is critical that users can choose the credential management platform they prefer, and switch credential providers securely and without burden. Until now, there has been no standard for the secure movement of credentials, and often the movement of passwords or other credentials has been done in the clear.
FIDO Alliance’s draft specifications – Credential Exchange Protocol (CXP) and Credential Exchange Format (CXF) – define a standard format for transferring credentials in a credential manager, including passwords, passkeys and more, to another provider in a manner that ensures transfers are not made in the clear and are secure by default.
Once standardized, these specifications will be open and available for credential providers to implement so their users can have a secure and easy experience when and if they choose to change providers.
The working draft specifications are open to community review and feedback; they are not yet intended for implementation as the specifications may change. Those interested can read the working drafts here, and provide feedback on the Alliance’s GitHub repo. Drafts are expected to be updated and published for public review often until the specifications are approved for implementation.
The FIDO Alliance extends a special thank you to its members in the Credential Provider Special Interest Group and its leads for driving and contributing to this important specification.
Building on the success of last year’s summit in Vietnam, the FIDO APAC Summit 2024 in Kuala Lumpur, Malaysia, once again brought together thought leaders, policymakers, technology innovators, and industry experts from across the Asia-Pacific region. With over 350 attendees from 15 countries—including Australia, China, France, Hong Kong, India, Indonesia, Japan, Malaysia, the Philippines, Singapore, South Korea, Taiwan, Thailand, the USA, and Vietnam—this year’s event served as a powerful platform for sharing knowledge, inspiring collaboration, and exploring the evolution of secure and convenient authentication technologies.
Watch the Recap Video
Malaysian Government Endorses Phishing-Resistant FIDO Authentication
In his keynote speech, CyberSecurity Malaysia Chief Executive Officer Datuk Amirudin Abdul Wahab emphasized, “Passwordless methods, such as FIDO-based biometric authentication, offer robust alternatives that are harder to compromise than traditional credentials. They also reduce the burden on users to remember complex passwords and mitigate the risks associated with credential theft.”
The National Agency of Cyber Security (NACSA) officially announced that they have become the first Malaysian government entity to adopt FIDO and passwordless technology. The local organizations classified as National Critical Information Infrastructure (NCII) are now using FIDO Security Keys for authentication and safeguarding applications and sensitive data.
The summit also received extensive media coverage, about 40 stories both pre- and post-event, featured in numerous esteemed publications. Some highlights include:
[The Edge] Over 80% of data breaches tied to weak passwords
[Business Today] Malaysian Businesses Should Ditch Passwords for Better Cybersecurity
[The Sun] Malaysia Advocates Passwordless Authentication to Enhance Cybersecurity
[BERNAMA TV] Malaysia Advocates Passwordless Authentication to Enhance Security
[Astro Awani] Malaysia Supports Passwordless Authentication to Enhance Cybersecurity
40 Speakers from Various Sectors Highlight Key Industry Trends
The Summit featured more than 40 speakers from sectors such as banking, government, telecom, enterprises, defense, eCommerce, solution vendors, online service providers, and manufacturers. Speakers represented leading organizations including Google, Lenovo, Samsung, ETDA Thailand, NTT Docomo, Mercari, Visa, SBI Bank, TikTok, iProov, Okta, TWCA, RSA, OneSpan, Thales, and VinCSS. One of the key themes of the 2024 Summit was the adoption of passkeys and the push towards achieving a passwordless experience across platforms. Here are some notable lessons shared:
Google: Demonstrated passkeys as the key to providing personalized experiences that users love. Cases from X, Amazon, Roblox, Kayak, WhatsApp, Zoho, and 1Password were shared. Roblox reported, “Passkeys are a significant security and usability upgrade for all of our users. In the six months since our launch, we have seen millions of users adopting passkeys to enjoy a simpler, faster, and more secure login experience.” Kayak noted a “50% reduction in average sign-in time with passkeys. With passkeys available on most devices, we’ve phased out traditional password logins and eliminated passwords from our servers.” 1Password highlighted that “in 2023, more than 1 million passkeys were created and saved by our users, and trial users who interact with passkey features are roughly 20% more likely to convert to paying customers.”
Samsung: Presented on passkeys on Galaxy mobile devices. Samsung launched the Passkey Provider Service at the end of 2023, providing a convenient user experience with the passkey as the default provider on Galaxy mobiles. Users can easily log in with fingerprint authentication and manage passkeys at a glance. Samsung ensures safe passkey synchronization across multiple devices logged into a Samsung account, including utilization with Samsung Knox Matrix. Statistics from the seven-month record of Samsung Passkey Provider include 7,672,861 cumulative registrations, 1,000,000 average new monthly registrations, and 850,000 average monthly authentications. Plans are in place to expand passkey usage for home appliance connectivity, such as TVs.
NTT Docomo: Highlighted the advantages of passkeys as an ideal authentication method—simple, frictionless user experience with biometric authentication, taking just 4-7 seconds compared to up to 30 seconds for SMS OTPs. They emphasized that passkeys are the only practical phishing-resistant authentication method.
Visa: Introduced Visa Payment Passkey for cardholder authentication in modern e-commerce. Traditional consumer authentication methods reduce fraud but often add friction, whereas biometric authentication with passkeys reduces both fraud and friction, leading to a 50% lower fraud rate.
TikTok: Reported success with passkeys, noting that over 100 million users registered within a year of implementation, with a 97% login success rate and a 17x faster login experience. There was also a 2% reduction in SMS OTP logins, as users who adopted passkeys chose them over other methods, improving app performance and reducing costs.
Workshops, Panel Discussions, and Networking Opportunities
This year’s Summit offered morning workshops on Passkeys and FDO (FIDO Device Onboard), allowing participants to delve deeper into implementing FIDO solutions. Attendees had the chance to work with FIDO experts to learn about integrating FIDO authentication into their services, understand technical specifications, and explore best practices. Experts also discussed the impact of emerging technologies like AI and post-quantum computing (PQC) on the authentication ecosystem while highlighting vulnerabilities related to human elements that can be addressed through implementing passkeys and FIDO’s efforts on future-proofing security.
Networking sessions, including a gala dinner, provided attendees a venue to relax and connect with peers from different parts of the world and sectors, fostering collaboration on developing solutions tailored to regional needs. Many participants enjoyed and respected the local culture while finding value in exchanging ideas and experiences about overcoming specific challenges in their respective sectors.
Celebrating Progress and Looking Forward
The FIDO APAC Summit 2024 showcased the significant progress towards convenient and secure FIDO-based passwordless authentication in the region. Through the collective efforts of governments, private sector leaders, and technology providers, the adoption of FIDO standards across the Asia-Pacific is accelerating, delivering stronger security and a seamless user experience.
The Asia-Pacific region is at the forefront of building a phishing-resistant, passwordless future, serving as an inspiration for other regions. The spirit of innovation and collaboration at the Summit reflects the dedication of all stakeholders to creating a secure and user-friendly digital landscape.
We extend our gratitude to all speakers, sponsors, participants, and members for making this year’s Summit a success. Together, we are shaping a more secure, passwordless future.
As supply chains become increasingly complex and stringent regulations like DSCSA and FSMA become more prevalent, understanding how to leverage EPCIS (Electronic Product Code Information Services) for granular visibility and efficient data management is more crucial than ever.
In this episode, hosts Reid Jackson and Liz Sertl are joined by Matt Andrews, Global Standards Director at GS1 US. Matt unpacks the fundamentals and applications of EPCIS, from its role in modeling supply chain processes to its transformative impact across industries like healthcare, food, retail, and logistics.
EPCIS can help your organization achieve unparalleled supply chain visibility, improve compliance, and drive competitive advantage.
In this episode, you’ll learn:
The intricacies of EPCIS (Electronic Product Code Information Services) and its universal application across industries for enhanced supply chain visibility, compliance, and efficiency.
How EPCIS can revolutionize inventory management with real-time data accuracy, from monitoring cycle counts to tracking product movement from back of house to point of sale.
How industries such as healthcare and food service leverage EPCIS to comply with regulations like DSCSA and FSMA 204, ensuring traceability down to the unique item level (a sketch of a single EPCIS event appears below).
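For readers who want to see the data model behind the discussion, below is a minimal sketch of a single EPCIS 2.0 "ObjectEvent", written as a Python dictionary. The identifiers, locations, and business-step values are illustrative assumptions rather than anything from the episode, and the exact field set and JSON-LD context should be checked against the current GS1 EPCIS 2.0 specification.

```python
import json

# Illustrative EPCIS 2.0-style ObjectEvent: items observed while shipping from
# a warehouse. Every identifier below is a made-up placeholder.
shipping_event = {
    "type": "ObjectEvent",
    "eventTime": "2024-10-07T14:30:00.000Z",
    "eventTimeZoneOffset": "+00:00",
    "epcList": [
        # Serialized item-level identifiers (SGTIN URIs); values are fictional.
        "urn:epc:id:sgtin:0614141.107346.2017",
        "urn:epc:id:sgtin:0614141.107346.2018",
    ],
    "action": "OBSERVE",               # observing existing objects, not creating them
    "bizStep": "shipping",             # CBV business step (short form)
    "disposition": "in_transit",       # CBV disposition (short form)
    "readPoint": {"id": "urn:epc:id:sgln:0614141.00777.0"},    # where the read happened
    "bizLocation": {"id": "urn:epc:id:sgln:0614141.00888.0"},  # where the items now are
}

# A real deployment would wrap events like this in an EPCIS document and share
# them over the EPCIS capture and query interfaces.
print(json.dumps(shipping_event, indent=2))
```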
Jump into the Conversation:
(00:00) Introducing Next Level Supply Chain
(06:25) Benefits that organizations are seeing by leveraging EPCIS
(08:00) Full granular visibility, item-level tracking, inventory management
(13:54) How EPCIS can log events from manufacturing to sales
(17:03) Enhanced supply chain visibility through real-time EPCIS data
(18:28) Accessing claims compliance through advanced visibility
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guests:
Matt Andrews on LinkedIn
The Future of Education & Workforce track, sponsored by Jobs for the Future and the Digital Credentials Consortium, invites you to explore a future where education is accessible to all learners and acts as a true gateway to economic advancement.
Building on the work of the JFF Plugfest competitions, participants can use tooling and resources that give them a quick start, and the confidence that their submission will provide real value to learners and workers.
Jobs for the Future (JFF)
Jobs for the Future is a nonprofit organization committed to transforming the US workforce and education systems to achieve equitable economic advancement for all. JFF drives change by designing innovative solutions, scaling best practices, influencing public policies, and investing in the development of a skilled workforce. JFF Labs is the innovation arm of the organization, focused on building the infrastructure for a skills-based talent marketplace, supporting an ecosystem of open standards-based interoperability that enables credential portability, and empowering individuals to use their data to access opportunity.
Digital Credentials Consortium (DCC)
The Digital Credentials Consortium is a network of leading universities and institutions advancing the use and understanding of portable, verifiable digital academic credentials in higher education through open source technology development and leadership, research, and advocacy. Founded by MIT and partners worldwide, DCC encourages a learner-centered ecosystem where portable, verifiable digital credentials are universally recognized and easily shared. By fostering collaboration among academia, industry, and standards organizations, DCC is working towards a future where these credentials are more accessible, secure, and verifiable.
The Challenges
Participants can choose from multiple challenges designed to push the boundaries of what's possible with VCs:
1. Verifiable Learner / Worker IDs and Records
Demonstrate the transformative potential of user-controlled data in learning and professional experiences. Use VCs such as Student IDs, Employee IDs, and Employment History to showcase compelling use cases such as:
Applying for new job opportunities using proof of employment history
Accessing platforms based on verified credentials
Demonstrating essential skills through verifiable records
Implementing selective disclosure principles to share only necessary information
2. Powerful New VC Tools
a. Multiple Language Support
Promote cross-border mobility by enabling educational credentials to be meaningfully used internationally. Build a tool that constructs VCs in any language, with a special emphasis on non-Latin scripts, using the renderMethod attribute.
b. Browser Integration
Enhance convenience and usability by developing a browser plugin for displaying and verifying VCs. This challenge also requires the use of the VC renderMethod attribute (an illustrative credential sketch follows the challenge list below).
3. Feature Enhancement
a. Learner Credential Wallet
Add support for the Learner Credential Wallet to use the VC renderMethod attribute, enabling rich displays of credentials within the application.
b. VerifierPlus
Enhance VerifierPlus to support rich displays using the renderMethod attribute, including capabilities for PDF rendering.
4. Bonus Design Challenge: Establishing Credibility in Digital Credentials
Explore innovative ways organizations can integrate VCs into their processes to build trust among users. Design the equivalent of a "browser padlock" for Verifiable Credentials, helping users understand that verification checks are valid and trustworthy.
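To make the renderMethod challenges above more concrete, here is a minimal sketch of an Open Badges 3.0-style credential carrying a renderMethod entry, written as a Python dictionary. Treat it as orientation only: the issuer, learner, achievement, and template URLs are hypothetical, the context URLs and rendering type string should be checked against the current Open Badges 3.0 and VC rendering specifications, and a real credential would also carry a cryptographic proof.

```python
import json

# Illustrative Open Badges 3.0-style credential with a renderMethod entry.
# All identifiers and URLs are hypothetical placeholders; verify context URLs
# and the renderMethod type string against the current specifications.
credential = {
    "@context": [
        "https://www.w3.org/ns/credentials/v2",
        "https://purl.imsglobal.org/spec/ob/v3p0/context.json",
    ],
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "issuer": {
        "id": "did:example:university-issuer",   # hypothetical issuer DID
        "type": ["Profile"],
        "name": "Example University",
    },
    "validFrom": "2024-10-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:learner-123",         # hypothetical learner DID
        "type": ["AchievementSubject"],
        "achievement": {
            "id": "https://example.edu/achievements/intro-to-dids",
            "type": ["Achievement"],
            "name": "Introduction to Decentralized Identifiers",
            "description": "Completed the introductory DID course.",
            "criteria": {"narrative": "Passed the final assessment."},
        },
    },
    # renderMethod points wallets and verifiers at a display template, e.g. an
    # SVG with localized text for multi-language or rich in-wallet rendering.
    "renderMethod": [
        {
            "id": "https://example.edu/templates/intro-to-dids.svg",
            "type": "SvgRenderingTemplate",      # exact type string varies by spec draft
            "name": "Portrait card (multi-language)",
        }
    ],
}

print(json.dumps(credential, indent=2))
```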
Prizes
This track offers a substantial prize pool totaling $15,000, distributed among top submissions that meet the challenge criteria and demonstrate exceptional innovation and impact.
Submission Requirements
All submissions must adhere to the following criteria:
Open Source Licensing: Projects must be open source under the MIT license to promote transparency and collaboration.
Technical Interoperability Standards: Submissions must comply with the technical standards used by the JFF Plugfest, including:
Credential Format: Open Badges 3.0 (using the Verifiable Credential format)
Issuing Credentials: Utilize the VC API with CHAPI or OpenID for Verifiable Credential Issuance
Exchanging Credentials: Use CHAPI, OpenID for Verifiable Presentations, or the WACI-DIDComm Interop Profile
Participants are encouraged to build upon tools provided by the Plugfest, such as the VC Playground, CHAPI, and the Digital Credentials Consortium Wallet.
Why Participate?
By joining this track, you have the opportunity to:
Contribute to solutions that can have a real-world impact on education and workforce development
Collaborate with leading organizations in the decentralized identity space
Showcase your innovative ideas and technical skills to a global audience
Be part of a movement that is shaping the future of learning and work
Sharon Leu, Executive in Residence at JFF Labs, highlights the importance of the challenges: “The challenges that we proposed are critical to the infrastructure that will help learners and jobseekers find meaningful opportunities at all stages of their learning and employment journey. We are excited for this community to work together to create the tools that will give people control of their data in wallets, data models that allow them to express their different identities as workers and learners, multi-language support for verifiable credentials, and a seamless verification experience for relying parties with minimal technology capacity.”
“The Digital Credentials Consortium advocates for open source, open standards, and open community to foster transparency, collaboration, and innovation in the development of digital credentialing systems,” adds Kerri Lemoie, Director at MIT Digital Credentials Consortium. “Hackathons foster creativity and collaboration, bringing together diverse minds to solve real-world problems in a short amount of time. Through experimentation, skill development, and community building we hope the participants are inspired to make tools and technologies that will enhance trust of portable, verifiable digital credentials that democratize access to educational achievements and skills verification.”
Kim Hamilton Duffy, Executive Director of DIF, emphasizes the transformative potential of this track: "Education and workforce development have the power to change lives. This challenge embodies the core reason I became involved in decentralized identity – to ensure people have control over credentials that are portable, verifiable, and meaningful across borders and contexts. I'm thrilled to see the innovative solutions our participants will create to address these critical issues."
Join Us in Revolutionizing Education and Workforce Development
Whether you're a seasoned developer in the decentralized identity space or new to the field, your participation can make a significant difference. Together, we can build the next generation of tools that will empower learners and workers worldwide.
Ready to Take on the Challenge?
Register for the DIF Hackathon 2024 and select the Future of Education & Workforce track. Let's collaborate to create a more accessible and verifiable future for education and career advancement.
Register now: https://difhackathon2024.devpost.com/
Join our informational session: https://www.eventbrite.com/e/education-and-workforce-track-overview-tickets-1029330524307
Read details about the challenges, prizes, and submission requirements: https://identity.foundation/hackathon-2024/docs/sponsors/edu/
Join the discussion on the DIF Hackathon discord: https://discord.gg/WXPzWvBCjD
Join us in shaping the future of education and work through innovation and collaboration.
The DIF Hackathon 2024 is in full swing, and we’ve got a fantastic lineup of challenges waiting for you! From reusable identity to revolutionizing digital identity in education, this is your chance to innovate, compete for amazing prizes, and help shape the future of decentralized identity. Below is the full lineup of sessions for the coming week!
🌟 ONT Login Challenge – Unlock Seamless Authentication!
📅 Date: Tuesday, October 8 | 8 AM PST / 5 PM CEST
Ontology is bringing you the ONT Login challenge! Learn how to integrate a decentralized universal authentication component for secure, reusable identity in Web2 and Web3 applications. Demonstrate how ONT Login can transform your app’s login experience while keeping user privacy intact.
💰 Prizes: 1st Place: $1000 USD | 2nd Place: $500 USD | 3rd Place: $300 USD
🔗 Register Now
💥 tbDEX Challenge – Power Up Payments with Known Customer Credentials!
📅 Date: Tuesday, October 8 | 9 AM PST / 6 PM CEST
Get ready to dive into the payments world with the tbDEX challenge! As a business or developer, you’ll use the Web5 SDK to streamline KYC processes with Known Customer Credentials (KCC). Join this session to unlock a future where seamless decentralized identity enhances payments.
💰 Prizes: 1st Place: $2500 USD | 2nd Place: $1500 USD | 3rd Place: $1000 USD
🔗 Register Now
🚀 How to Resolve DIDs and Verify VCs for Free with VIDOS
📅 Date: Tuesday, October 8 | 10 AM PST / 7 PM CEST
This session will unlock the power of DIDs and Verifiable Credentials in recruitment and reusable identity. Explore two dynamic challenges to develop solutions that make identity verification more secure and efficient for real-world applications.
💡 Challenge 1: Employer Portal Using DIDs and VCs (Education Track)
Build a proof-of-concept that allows recruiters to verify and onboard candidates securely using verifiable credentials.
💡 Challenge 2: VC Interoperability (Reusable ID Track)
Create a solution that demonstrates VC interoperability across scenarios, like using a passport for travel or age-gated entry.
💰 Prizes: Total prize pool of $4,500 USD
🔗 Register Now
🚀 Join the Future of Education and Economic Advancement!
📅 Date: Wednesday, October 9 | 9 AM PST / 6 PM CEST
This track invites innovators to develop solutions that make education and economic opportunities more accessible through decentralized identity. Dive into challenges that use Verifiable Credentials (VCs) for educational records, employment history, and more.
💡 Challenge C1: Verifiable Learner/Worker IDs
Build VCs representing Student IDs, Employee IDs, and Employment History. Show how they can be used for job applications, skill verification, and more.
💡 Challenge C2: Build Tools for Global Use
Develop tools that support VCs across borders, languages, and digital platforms, creating a more universal decentralized identity solution.
💰 Prizes: Total prize pool of $15,000 USD
🔗 Register Now
🔑 Crossmint's Reusable Identity Challenge!
📅 Date: Wednesday, October 9 | 10 AM PST / 7 PM CEST
Unlock the potential of reusable digital identities to simplify KYC, KYB, and age verification processes. Use Crossmint’s Verifiable Credentials API to build secure, scalable identity solutions for various platforms. Let's tackle identity verification and compliance with a focus on privacy and usability!
💰 Prizes:
1st Place: $800 USD + $2,000 in Crossmint credits
2nd Place: $500 USD + $1,000 in Crossmint credits
3rd Place: $200 USD + $500 in Crossmint credits
🏨 Revolutionize Hotel Check-Ins with Verifiable Credentials (VC)!
📅 Date: Thursday, October 10 | 9 AM PST / 6 PM CEST
Imagine a world where hotel check-ins are seamless and secure. This challenge, led by Mateo Manfredi, Senior Full Stack Developer at Extrimian, invites you to build a privacy-focused check-in system using government-issued Verifiable Credentials. Let’s reimagine how hotels handle guest data and create a safe, smooth experience.
💰 Prizes: 1st Place: $1000 USD + $1800 in Extrimian Platform credits 🔗
🤖 Harness the Power of Decentralized Identity for Verifiable AI
📅 Date: Thursday, October 10 | 10 AM PST / 7 PM CEST
In the age of AI, trust is more important than ever. This challenge, led by Ankur Banerjee, Co-founder and CTO of cheqd, invites you to create solutions that ensure AI-generated content is trustworthy and verifiable using decentralized identity and Verifiable Credentials.
💰 Prizes: Total prize pool of $7,500 USD in CHEQ tokens
🔗 Register Now
Don’t miss your chance to innovate, compete, and win big at the DIF Hackathon 2024! Whether you're passionate about education, AI, payments, or hospitality, there’s a challenge for you. Let’s build the future of decentralized identity together.
Best regards,
The DIF Hackathon Team
Since the mid-1990s, I’ve been advocating for the creation of secure digital infrastructures that protect human rights, civil liberties, and human dignity online. My mission has always been to decentralize power and give individuals control over their digital lives, from my early work co-authoring the TLS standard to my recent efforts supporting DIDs and Verifiable Credentials.
We now stand at another crossroads in digital identity. The current paradigm, where an individual’s private key is the cornerstone of their identity, has served us well but it also has significant limitations—especially as we move toward a more interconnected, collaborative digital world. Fortunately, advances in cryptography allow us to rethink single-key self-sovereign identity systems, suggesting the possibility for new options such as edge identifiers and cryptographic cliques.
The Single Signature Paradigm
Identity management has long centered on the use of single-signature cryptographic keys. Operating on a straightforward principle, this “Single Signature Paradigm” requires the possession of a unique private key for cryptographic signatures, allowing actions such as authentication, data encryption, and transaction validation.
The security of this model hinges on the confidentiality of the private key: a compromise of the key means a compromise of security. To reduce this threat, standards often require private keys be stored in specialized hardware, providing a fortified environment. This model is the cornerstone of security strategies endorsed and required by entities such as the National Institute of Standards and Technology (NIST), European Union government standards, and various international standards groups such as the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C).
There has been very limited success in strengthening this fundamental methodology through protocols such as key rotation. Meanwhile, the Single Signature Paradigm has many flaws, the most serious of which are Single Point of Compromise (where a key can be stolen) or Single Point of Failure (where a key can be lost). If anything, these problems are worsening, as demonstrated by recent side-channel attacks that can extract keys from older hardware. Other issues include scalability limitations, hardware dependency, operational inflexibility, and numerous legal, compliance, and regulatory issues.
There are fundamental limits to what can be achieved within the confines of a Single Signature Paradigm, making the need for evolution clear.
The Keys to Self-Sovereign Identity
The Single Signature Paradigm is problematic for many use cases surrounding digital assets, but particularly so for the management of digital identities, because identities are both central to our digital experience and largely irreplaceable. You can’t just create a new identity to replace a compromised one without losing credentials and connections alike.
When I first conceived of my ideas for the personal control of digital identity, known today as self-sovereign identity, I didn’t want to be limited by the Single Signature Paradigm. Instead, I modeled self-sovereign identity to be an identity that existed in a social context, not an isolated identity defined by singular keys. I wrote some on this in The Origins of Self-Sovereign Identity.
One of the key principles of living systems theory is the concept of the membrane. This is not just a physical barrier but a selective boundary that controls the exchange of energy, matter, and information between the system and its environment. The membrane allows certain things to pass through while restricting others, thereby maintaining the system’s integrity and autonomy. It’s a delicate balancing act: the system must allow enough interaction with the environment to sustain itself while ensuring that it isn’t overwhelmed by external forces.
…
Though I meant for it to be something that would protect the individual, self-sovereignty doesn’t mean that you are in complete control. It simply defines the borders within which you can make decisions and outside of which you negotiate with others as peers, not as a petitioner.
Implementing practical solutions that encapsulate this interconnectedness has historically been challenging due to the dominance of the Single Signature Paradigm. This has led to self-sovereign identity systems that adhere to the Single Signature Paradigm, which in turn causes them to overemphasize individualism, which was not my intent.
It’s not the only way.
Relational Edge Identity
Living systems theory suggests that identity isn’t just about oneself, but about one’s connections to the rest of society.
Consider the process of a child’s identity formation. They may be named “Joshua” upon birth, suggesting a unique, nodal form of identity. But, there are many Joshuas in the world. To truly define the child’s identity requires linked local names (or pet names) that define relationships. The father and mother say “my child”, attesting to the relationship between each of them and the child. A sibling says, “My brother’s child” and a grandparent says “my grandchild”.
Though unidirectional descriptors are useful to help identify someone, each link is actually bidirectional, creating an edge between two individual nodes of identity:
At this point we must ask: does the node really define identity or is it the edges? The most complete answer is probably that an identity is defined by an aggregation of edges sufficient to identify within the current graph context: “Joshua, who is filially linked with Mary, who is filially linked with Anna.”
Relational Edge Keys
We can model the interconnectedness of edge-based relationships in an identity system by using Schnorr-based aggregatable multisig systems that support Multi-Party Computation (MPC), such as MuSig2 or FROST (see the Appendix in the next article for more on the technology and the differences between the two systems). Schnorr-based systems are an excellent match for edge identity because their peer-based key construction technique matches the peer-based model of an identity graph: two users come together to create a joint private key.
To create a relational edge key, the two identities (nodes) connected by an edge each generate a private commitment. These commitments are combined in a cryptographic ceremony to form the edge’s private key. The associated public key then effectively becomes an identifier for this two-person group, indiscernible from a single user’s public key thanks to Schnorr.
Leveraging the Multi-Party Computation (MPC) of MuSig2 or FROST allows for the creation of a private key that doesn’t exist on a single device. It exists only in a distributed cryptographic construct, colloquially called a “fog”. Through unanimous consent, users can use this “fog” to sign collectively, allowing (even requiring) joint agreement for joint actions.
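As a rough illustration only (this is not MuSig2 or FROST, which add key-aggregation coefficients, nonce handling, and rogue-key protections, and which never assemble the joint secret on any single device), the toy Python sketch below shows the basic arithmetic of two parties combining commitments into one joint public key, using deliberately small, insecure Schnorr-style group parameters.

```python
import secrets

# Toy Schnorr-style group: tiny, insecure parameters chosen only to make the
# arithmetic visible. Real systems use large prime-order (elliptic-curve) groups.
P = 467   # modulus; 467 = 2*233 + 1 is prime
Q = 233   # prime order of the subgroup generated by G
G = 4     # generator of the order-Q subgroup (4 is a quadratic residue mod 467)

def keygen():
    """Each participant picks a private commitment x and publishes X = G^x mod P."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

# Two nodes of the identity graph, e.g. Mary and Bob.
x_mary, X_mary = keygen()
x_bob, X_bob = keygen()

# The edge key: the joint public key is the product of the public commitments,
# corresponding to the sum of the private commitments. In MuSig2/FROST that sum
# is never computed on any single device; we compute it here only to check the
# arithmetic of the toy example.
edge_public = (X_mary * X_bob) % P
edge_private = (x_mary + x_bob) % Q
assert pow(G, edge_private, P) == edge_public

print("edge public key (identifier for the Mary-Bob relationship):", edge_public)
```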
This relational-edge identity model begins to resolve the issues with current self-sovereign identity models by recognizing identity as being about more than just a single self-sovereign person. It also offers substantial benefits including better security, trust, resilience, and verification due to full keys existing only in this distributed cryptographic “fog”. Finally, it allows relationships to dynamically grow and change over time through the addition or removal of edges in a graph.
Clique Identity
Edge identity is just the first step in creating a new model for identity that recognizes that personal digital identity is founded in relationships. The next step is to expand pairwise relationships by forming a clique, specifically a triadic clique.
A clique in graph theory is “a fully connected subgraph where every node is adjacent to every other node.” Thus, in a complete graph, no node remains isolated; each is an integral part of an interconnected network. This concept is core to understanding the transition from simple pairwise relationships to more complex, interconnected group dynamics.
In our example, there is an obvious triadic clique: the nuclear family of Mary, Bob, and Joshua.
Remember that the term “nuclear family” comes from the word “nucleus”. That’s a great metaphor for a tight, strongly connected group of this type. A triadic clique fosters strong social cohesion and supports a robust, tightly-knit network.
Cryptographically, we form a triadic clique by generating a relational edge key for each pair of participants in the group. This represents the pair’s joint decision-making capability. Once these pairwise connections are in place, the trio of edges participates in a cryptographic ceremony to create a shared private key for the whole group, which in turn creates a clique identifier: the public key. This identifier represents not just an individual or a pair but the collective identity of the entire triadic group (and, once more, their decision-making capability).
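Continuing the same toy illustration (same caveats: insecure demonstration parameters, and a real MuSig2/FROST ceremony never materializes a combined secret), the sketch below treats the three pairwise edge keys, rather than the three people, as the participants whose keys aggregate into the clique identifier.

```python
import secrets

# Same toy Schnorr-style group as the edge-key sketch above (insecure, illustrative only).
P, Q, G = 467, 233, 4

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

# Three pairwise edge keys for the Mary-Bob-Joshua family; each would itself be
# the output of a two-party ceremony like the one sketched earlier.
edges = {name: keygen() for name in ("mary-bob", "mary-joshua", "bob-joshua")}

# The clique identifier is the aggregate public key of the three edges,
# not of the three people directly.
clique_public = 1
for _, edge_pub in edges.values():
    clique_public = (clique_public * edge_pub) % P

# Sanity check only possible in this toy setting: a real protocol never
# assembles the combined secret on any single device.
clique_private = sum(priv for priv, _ in edges.values()) % Q
assert pow(G, clique_private, P) == clique_public

print("clique identifier for the Mary-Bob-Joshua family:", clique_public)
```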
Although my examples so far suggest that nodes in a clique are all people, that doesn’t have to be the case: I’ll talk about cliques of devices as one of three variations of this basic formula in my next article.
Why Cliques of Edges?
As noted, a clique is formed by the pairwise edges jointly creating a key, not by the original participants doing so. There are a number of advantages to this.
Most importantly, it builds on the concept of identity being formed by relationships. Call it the Relationship Signature Paradigm (or the Edge Signature Paradigm). We’re saying that a group is defined not by the individuals, but by the relationships between the individuals. This is a powerful new concept that has applicability at all levels of identity work.
Individually, we might use the Relationship Signature Paradigm to create an individual identity based on edge-based relationships. My relationship to my friends, my relationship to my company, my relationship to my coworkers, my verifiable credentials (which are themselves relationships between myself and other entities), and my relationship to my published works together define the “clique” that is me. Crucially, this identity is built upon the relationship with other participants, not the participants themselves.
At a higher-level, we can also use this paradigm to form a clique of cliques, where each member is not a participant or even an edge, but instead a clique itself! Because we already recognized cliques as being formed by relational groups when we defined a first-order clique as a collection of edges, we can similarly define a clique as a collection of cliques (or even a collection of edges and cliques), creating a fully recursive paradigm for identity.
There is one clique-based design where the Relationship Signature Paradigm can’t be used: fuzzy cliques, which is another variation of clique identity. But more on that in the next article.
Higher Order Graphs
There is no reason to limit cryptographic cliques to three edges. However, the larger the group, the harder it is to close the graph: as the number of nodes (n) in a clique increases, the number of edges grows following the formula n(n-1)/2, which is the number of unique edges possible between n nodes.
A “4-Clique” (or K4), for example, is a complete graph comprising 4 nodes, where each node is interconnected with every other node, resulting in a total of (4*3)/2 = 6 edges.
This pattern continues with larger cliques: K5 = (5*4)/2 = 10 edges; K6 = (6*5)/2 = 15 edges; K7 = (7*6)/2 = 21 edges; and so on.
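A quick calculation makes this scaling concrete; the small Python helper below counts the edges, and therefore the pairwise key-creation ceremonies, needed to close a clique of n nodes.

```python
def clique_edges(n: int) -> int:
    """Number of unique edges (pairwise ceremonies) in a complete graph K_n."""
    return n * (n - 1) // 2

# K3 through K7: 3, 6, 10, 15, 21 edges
for n in range(3, 8):
    print(f"K{n}: {clique_edges(n)} edges")
```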
In practice, as the number of nodes in a clique increases, the complexity of forming and maintaining these fully connected networks also escalates: each additional connection requires its own key-creation ceremony with every existing member of the graph.
Complete graphs, or closed cliques, have valuable applications across various disciplines, from computer science to anthropology, but they aren’t the only solution for cryptographic cliques. I’ll talk more about the alternative of open cliques as another variation of the clique identity model in my follow-up article next week.
Conclusion
The Single Signature Paradigm has been at the heart of the digital world since the start. It’s always had its limitations, but those limitations are growing even more problematic with the rise of digital identity.
Relational edge keys and closed cliques offer a next step, modeling how identity is actually based on relationships and that many social decisions are made through the edges defined by those relationships.
Other advantages of using clique-based keys and identities include:
Decentralized Identity Management. Peer-based edge and clique identifiers are created collaboratively, bypassing third-party involvement, thus supporting self-sovereign control and improving anonymity.
Identity Validation. Peer-based identifiers help to authenticate social identities, creating trust.
Resilience Against Single Points of Failure. Distributing control among multiple parties in a clique guards against single points of failure.
Secure Group Decision Making. Relationships or groups can securely and irrevocably make decisions together.
Enhanced Privacy in Group Interactions. Aggregatable Schnorr-based signatures keep the identities of the members of a relationship or a clique private.
Cliques can be quite useful for a number of specific fields:
Blockchains. The use of aggregatable signatures creates smaller transactions on blockchains.
Collaborative Projects. Collaborative projects and joint ventures can use clique keys to authenticate shared resource usage and other decisions.
Financial Fields. Dual-key control is often required in financial fields, and that’s an implicit element of relational edge keys.
Internet of Things (IoT) & Other Smart Networks. Relational edge keys can ensure secure and efficient communication among diverse devices that have paired together.
Medicine & Other Sensitive Data. When data is sensitive, cliques can ensure all parties have agreed to the data sharing terms, maintaining both security and collaboration integrity.
By leveraging cryptographic cliques for group identification and decision-making, we open a wide array of opportunities. These are just the beginning: open cliques, fuzzy cliques, and cliques of devices can offer even more opportunities, as I discuss in my next article in this series (which also talks a little bit about the cryptography behind this).
Public review ends November 7th
OASIS and the Energy Interoperability TC are pleased to announce that Energy Interoperation Common Transactive Services (CTS) Version 1.0 is now available for public review and comment.
Common Transactive Services (CTS) permits energy consumers and producers to interact through energy markets by simplifying actor interaction with any market. CTS is a streamlined and simplified profile of the OASIS Energy Interoperation (EI) specification, which describes an information and communication model to coordinate the exchange of energy between any two Parties that consume or supply energy, such as energy suppliers and customers, markets and service providers.
The documents and all related files are available here:
Energy Interoperation Common Transactive Services (CTS) Version 1.0
Committee Specification Draft 04
09 September 2024
Editable Source: https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd04/ei-cts-v1.0-csd04.pdf (Authoritative)
HTML: https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd04/ei-cts-v1.0-csd04.html
DOCX: https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd04/ei-cts-v1.0-csd04.docx
For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at:
https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd04/ei-cts-v1.0-csd04.zip
How to Provide Feedback
OASIS and the Energy Interoperability TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.
The public review starts October 7, 2024 at 00:00 UTC and ends November 7, 2024 at 23:59 UTC.
Comments from TC members should be sent directly to the TC’s mailing list. Comments may be submitted to the project by any other person through the use of the project’s Comment Facility: https://groups.oasis-open.org/communities/community-home?CommunityKey=70a647c6-d0e6-434c-8b30-018dce25fd35
Comments submitted for this work by non-members are publicly archived and can be viewed by using the link above and clicking the “Discussions” tab.
Please note, you must log in or create a free account to see the material. Please contact the TC Administrator (tc-admin@oasis-open.org) if you have any questions regarding how to submit a comment.
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.
OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.
Additional information about the specification and the Energy Interoperability TC can be found at the TC’s public home page: https://www.oasis-open.org/committees/energyinterop/
Additional references:
[1] https://www.oasis-open.org/policies-guidelines/ipr/
[2] https://www.oasis-open.org/committees/energyinterop/ipr.php
The post Invitation to Comment – Energy Interop (CTS) Version 1.0 appeared first on OASIS Open.
CAP Standard Has Transformed Emergency Communication and Continues to Save Lives
Boston, MA – 7 October 2024 – This month marks the 20th anniversary of the Common Alerting Protocol (CAP) being established as an OASIS Open Standard. CAP, part of the EDXL suite of standards, provides an open, non-proprietary message format for delivering all-hazard alerts and notifications. Over the past two decades, CAP has become a model of global collaboration and a fundamental component of emergency communications systems worldwide. Its use across multiple platforms has helped save countless lives through timely, reliable messaging. Today, 87% of the world’s population lives in a country with at least one national-level CAP news feed for emergency notifications.
CAP enables a consistent message to be disseminated simultaneously over a variety of communication pathways, including radio, television, mobile phones, emails, and other media. This all-hazards, all-media format ensures that critical alerts (e.g., weather events, earthquakes, tsunami, volcanoes, public health crises, power outages, fires, child abductions, and more) reach the public swiftly and efficiently, regardless of the medium.
“As we celebrate 20 years of CAP, I’m incredibly proud that it has become the backbone of emergency communication worldwide, recognized by the UN as the standard for the Early Warnings for All program. The success of CAP is a testament to the dedication and collaboration of so many over the years, and I extend my sincere thanks to everyone who has played a part in making it the global standard it is today,” said Elysa Jones, chair of the OASIS Emergency Management Technical Committee (EMTC). “CAP’s ability to deliver consistent, interoperable alerts through multiple channels has made it indispensable for disaster management. We’ll continue to evolve CAP to ensure it serves communities in need.”
The CAP community will commemorate this significant anniversary milestone at the CAP Implementation Workshop from 22-24 October in Leuven, Belgium. OASIS is a co-sponsor of the event, which will focus on the use of CAP and its consistent use throughout the world. OASIS and the EMTC will continue to work with nations and organizations to explore future advancements in global emergency alerting.
The fundamental need for CAP was identified by the Partnership for Public Warning (PPW) in response to the 9/11 attacks when there was no consistent method for informing the nation. The 2004 Indian Ocean tsunami highlighted the urgent need for improved emergency alert communication across the globe. With the support of Eliot Christian, longtime CAP advocate and former chief architect of the World Meteorological Organization (WMO) Information System (WIS), and Elysa Jones, chair of the OASIS EMTC, along with EMTC members, CAP was officially adopted by the International Telecommunications Union (ITU) in 2007 as ITU-T Recommendation X.1303. Since then, many international organizations like the WMO, the International Federation of Red Cross and Red Crescent Societies (IFRC), and the United Nations Office for Disaster Risk Reduction (UNDRR) have embraced CAP as an essential standard for emergency alerting. In 2021, the Call to Action on Emergency Alerting set a goal to achieve 100% CAP implementation by 2025, an initiative that has since been integrated into the UN’s Early Warnings for All initiative.
OASIS and its partners are committed to increasing global CAP adoption. Participation in the EM TC is open to all through membership in OASIS, with interested parties encouraged to join and contribute to shaping the future of alerting. To get involved in the TC, visit www.oasis-open.org/join-a-tc.
The post OASIS Celebrates 20th Anniversary of Common Alerting Protocol, Global Standard for Alerts and Warnings appeared first on OASIS Open.
The Human Colossus Foundation (HCF) was honored to participate in the Global Digital Public Infrastructure (DPI) Summit in Cairo, under the auspices of H.E. President Abdel Fattah El-Sisi, President of The Arab Republic of Egypt.
The Global Digital Public Infrastructure (DPI) Summit in Cairo was the world's first summit dedicated to DPI, bringing together a diverse ecosystem of experts. The summit featured insightful keynotes, engaging discussions, and practical focus panels where participants shared real-world DPI implementation experiences. Success stories spanned national digital identity, payments, government services, and data exchange initiatives. However, discussions highlighted challenges, particularly in cross-governance data exchange and the interoperability layer, signaling a need for improved solutions to ensure a seamless DPI ecosystem across sectors and borders.
At the summit, discussions emphasized the need to think about DPI implementation beyond service-oriented use cases such as finance, public services, and governance. These sectors have benefited from DPI adoption, but as noted during the summit, there remains much work to do in improving its application across industries.
One of the key discussions revolved around the evolution of DPI. The first iteration of DPI provided the initial frameworks for public digital infrastructure, focusing on secure and efficient digital services. However, the future version promises to shift the focus towards interoperability, with a higher emphasis on connecting different systems and ensuring they work together seamlessly. This is a critical development as governments and organizations look to build more integrated, accessible, and collaborative infrastructures. HCF welcomes this trend, as it goes in the direction the Foundation has been promoting since its creation in 2020: total interoperability within and across ecosystems.
Data exchange was a recurring theme during the event. In particular, the necessity of cross-border governance frameworks was raised and discussed. While there has been much progress within individual countries or regions, many experts admitted that the global community still lacks a clear pathway to enable effective cross-governance data exchange. The complexities of regulatory frameworks, governance structures, and varying legal standards pose significant challenges. At present, there seems to be no clear consensus on how to tackle this issue, highlighting the urgent need for collaborative innovation. HCF witnesses these complexities in the projects we are involved in (see the Governance Periscope blog post of September 16). There is not one single digital governance framework that will capture the world's diversity.
HCF was pleased to see a strong emphasis on inclusion and the need for vendor-agnostic solutions, ensuring that digital public infrastructure is accessible to all, regardless of geography or socio-economic status. This aligns with HCF’s mission of building decentralized, scalable infrastructure that works for everyone, not just for those in advanced economies or within specific vendor ecosystems.
HCF’s vision for a digital infrastructure that scales horizontally was widely accepted at the summit. The need for a common infrastructure that can be applied across various industries and sectors was highlighted as critical for the next phase of DPI development. This closely aligns with HCF's work, which focuses on enabling cross-sector digital infrastructure that is decentralized, scalable, and interoperable.
The first DPI summit was a great success, setting the stage for the continued development of global digital public infrastructures. The next DPI Summit is scheduled for November 4-6, 2025, and it promises to build on the momentum from Cairo, with even more insights and innovations expected to emerge.
HCF is excited to continue contributing to these important discussions, helping shape the global DPI ecosystem and ensuring that it meets the needs of people across all sectors and regions.
In conclusion, the Global DPI Summit in Cairo highlighted the critical role DPI will play in shaping the future digital economy, and HCF’s work in decentralized infrastructure aligns perfectly with this vision. We look forward to further collaborations and innovations in the years to come.
By Erin Miller, Hector Falcon, and Joel Francis, Space ISAC
As space operations become increasingly complex, the need for effective threat intelligence sharing is more crucial than ever. The increase in data transmission across space networks brings both opportunities and heightened risks, as cyber threats increasingly target critical space infrastructure. Protecting these assets demands a coordinated and proactive approach to threat intelligence sharing. To address this, the OASIS global standards body is working with Space ISAC to form the Space Automated Threat Intelligence Sharing (SATIS) Technical Committee (TC). The group will formally launch on Oct 9; initial members include NSA, Northrop Grumman, Cyware, MITRE, Peraton, and Carnegie Mellon University. SATIS will build on existing frameworks like Structured Threat Information Expression (STIX) and Trusted Automated eXchange of Intelligence Information (TAXII) to help secure space operations against evolving threats…
The post Advancing Cybersecurity in Space at OASIS appeared first on OASIS Open.
1. What is the mission and vision of VoxMind?
At VoxMind, our mission is to revolutionize digital security by providing cutting-edge voice biometrics solutions that protect identities and ensure secure authentication. Our vision is to create a world where identity verification is effortless, secure, and universally trusted—one where your voice is your most secure digital asset. We aim to set the gold standard in voice biometrics, delivering scalable and innovative solutions that address the evolving security needs of individuals and organizations worldwide.
2. Why is trustworthy digital identity critical for existing and emerging markets?
In today’s increasingly digital world, a trustworthy digital identity is crucial for secure transactions, both for established industries and emerging markets. As the global economy becomes more interconnected, consumers and businesses demand frictionless and secure authentication processes. Without trustworthy digital identities, fraud and identity theft risks increase, eroding user confidence. By incorporating secure and scalable biometric solutions like voice authentication, businesses can protect against these threats while delivering seamless customer experiences.
3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?
Digital identity will enable a secure, efficient, and inclusive global economy. By ensuring secure access to services, whether financial, healthcare, or government, it can streamline operations, reduce fraud, and increase user trust. At VoxMind, we address challenges like identity fraud, AI-driven threats like deepfakes, and the need for easy-to-use solutions. Our voice biometrics technology offers a future-proof solution that can adapt across industries, safeguarding users while simplifying the digital verification process.
4. What role does Canada have to play as a leader in this space?
Canada, through organizations like DIACC, plays a pivotal role in shaping global standards for secure digital identity. With its commitment to innovation and inclusivity, Canada is well-positioned to lead in developing scalable, privacy-preserving solutions that can be adopted globally. By collaborating with global partners, Canada can help set the benchmark for interoperable and secure digital ecosystems that benefit both individuals and businesses.
5. Why did your organization join the DIACC?
VoxMind joined DIACC to be part of a visionary network shaping the future of digital identity. By collaborating with DIACC and its members, we aim to contribute to the creation of secure and interoperable identity standards. DIACC’s mandate aligns with our commitment to protecting individual identities in a scalable, secure, and privacy-preserving manner. As a Sustaining Member, we look forward to sharing our voice biometrics expertise and helping build a secure digital identity infrastructure for Canada and beyond.
6. What else should we know about your organization?
VoxMind is pioneering voice biometrics as a secure, convenient, and adaptive identity verification solution. We address modern security threats such as deepfakes and voice cloning while ensuring seamless user experiences across various industries, including finance, healthcare, and IoT. Our technology is designed to be language-agnostic, scalable, and adaptable to evolving security challenges. As we continue to innovate, we are committed to building partnerships that enhance global security and trust in digital identities.
Do not hesitate to contact us for more information at contact@voxmind.ai
Zug, Switzerland — October 1, 2024 — Energy Web is proud to announce the beta launch of AutoGreenCharge, a mobile app designed to decarbonize electric vehicle (EV) charging. With AutoGreenCharge, users can ensure that every EV charging session is powered by renewable energy. The app is accessible to owners of popular electric vehicles, including Tesla, BMW, Mercedes, and others, bringing the promise of green charging to a worldwide, mainstream audience.
Powered by the decentralized technology of Energy Web’s EnergywebX and secured by the Polkadot blockchain, AutoGreenCharge offers a simple, secure, and verifiable solution to ensure EV charging is not just electric, but 100% renewable. By integrating renewable energy certificates (RECs), the app will automatically match EV charging sessions with clean energy, providing verifiable green charging in real time. While in the beta phase, users can familiarize themselves with the app’s core features and experience the future of EV charging firsthand.
AutoGreenCharge allows EV owners to easily connect their vehicles through a partnership with Smart Car. Once connected, every charging session is automatically tracked, giving users detailed insights into their energy consumption and environmental impact. As the app evolves toward full production, users will be able to retire real renewable energy certificates with each charging session, ensuring their cars are powered by clean, sustainable energy sources. Additionally, they will have the option to specify preferences for the type and location of renewable energy, offering personalized access to solar, wind, and other clean energy sources from around the globe.
Mani Hagh Sefat, CTO of Energy Web, shared, “AutoGreenCharge represents a major step forward in the electrification and decarbonization of transportation. By providing EV owners with a seamless way to ensure their cars are charged with renewable energy, we’re empowering drivers to make more sustainable choices and actively contribute to the global energy transition.”
AutoGreenCharge’s integration with the Polkadot blockchain ensures that every transaction and certificate retirement is securely recorded and verifiable, enhancing transparency and trust in the system. This cutting-edge app is a key development in the broader mission to build a more resilient, efficient, and sustainable energy system.
With the beta version now available, EV owners are encouraged to download the AutoGreenCharge app and start participating in this transformative initiative. The app can be found via Apple's TestFlight and on the Google Play Store. As the app moves towards its full production release, users will play a crucial role in refining its features and shaping the future of green charging.
For more information, visit Energyweb.org
Energy Web Launches AutoGreenCharge Beta App to Decarbonize EV Charging, Secured by Polkadot was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
The fourth revision of the draft NIST SP 800-63-4 Digital Identity Guidelines is now open for public comment.
The FIDO Alliance hosted a webinar on September 24, 2024, with top digital identity experts to discuss the latest updates to the standard and what they mean for passkeys.
Megan Shamas, CMO of the FIDO Alliance, was joined by guests Ryan Galluzzo, Digital Identity Program lead of NIST NCCOE, Teresa Wu, co-chair of the FIDO Alliance Government Deployment Working Group and VP of Smart Credentials at IDEMIA. The panel unpacked the latest changes to the draft and shared what it means for passkeys.
Webinar attendees also had an opportunity to get questions answered before the public comment submission deadline next month. NIST requests that all comments be submitted by 11:59 pm Eastern Time on October 7, 2024.
View the webinar slides below.
Webinar: NIST SP 800-63 Digital Identity Standard: Updates & What it Means for Passkeys.pptx from FIDO Alliance
Virginia, US – 30 September 2024 – The Board of Directors of Kantara Initiative has announced that Dr Carol Buttle is to join the organization in the newly created role of Chief Technology Officer (CTO).
Carol joins from the UK government’s Department of Science, Innovation and Technology (DSIT), where she was Head of Certification and Assurance. She brings unrivalled expertise in designing trust frameworks and identifying the implications of specific regulations and standards designed to support personal data privacy and security. Carol has regularly contributed to the development of identity trust frameworks across the globe. As demand for identity certification extends into new markets and territories, she will ensure that Kantara maintains its role as a leader in identity assurance, offering challenge and guidance to support all our members, clients, and partners.
“Standards are everywhere,” said Carol. “We all seek assurance that the products we use, the medicines we consume – even the restaurants we visit – meet the appropriate standards and will not cause us harm. It is no different with identity. I see my role not just about certification and approvals. It is about how I can steer the wider industry to improve for the good of individual citizens, and particularly the most vulnerable.”
We asked Kantara Executive Director, Kay Chopard, about what she sees as the greatest impact of the new role. “Our investment in a Chief Technology Officer demonstrates significant commitment to the future of the industry as a whole, particularly with regard to the potential for international growth and greater interoperability across sectors and territories. Carol’s appointment follows on from the recent arrival of UK-based auditors James Keenan and David Nutbrown as our Head of Certification Delivery and Head of Regulatory Compliance, respectively. Their expertise greatly strengthens our existing operations and will secure confidence in the future direction of certification and identity assurance.”
Commenting on Carol’s appointment, Kantara Board Chair Andrew Hughes stated: “Carol’s appointment brings a real depth of regulatory and operational expertise to our leadership. It underpins the valuable contribution we already make through our Work Groups and certification programs. Carol brings with her thorough knowledge and expertise of UK requirements and how they might apply in the US. This will benefit those Kantara members who are engaged with US Federal Agencies or those wishing to become certified under the UK Digital Identity & Attributes Trust Framework (DIATF).”
Click here to understand more about our US assurance program approval process
The post Dr Carol Buttle joins Kantara Initiative as Chief Technology Officer (CTO) appeared first on Kantara Initiative.
In this episode, Lisa and Bryce are joined by privacy advocacy expert Zach Edwards as they sit down and discuss the hidden world of Identity Resolution and Customer Data Platforms.
The post “Unsafe at Any Click” – Episode 4 appeared first on Internet Safety Labs.
In our recently published research on the worldwide web of commercial surveillance, we took a close look at the global infrastructure connecting and correlating personal information across platforms, devices, and even from physical world sources like point-of-sales systems. The connectivity is, in a word, staggering. At some point, however, there is a first-party relationship with a data subject. From that starting point, personal information is systematically being shared with countless entities including data brokers. In such a hyper-interconnected infrastructure, how can a single publisher make promises about where customer data is going? Moreover, how could a user possibly consent to the sharing of their data with thousands of recipient organizations? But the complexity and unknowability of system behavior isn’t just with these hyper-interconnected marketing networks. As we touched on in a recent podcast with Zach Edwards, very large platforms (like Google, Facebook, X) are just as complex and opaque as the identity resolution and customer data platform networks. Software is increasingly a leaky, hyper-connected, unpredictable sieve of personal data sharing. In this blogpost, we take a closer look at the opacity and leakiness of the “big dogs”–large online platforms with hundreds of millions and billions of users.
1. Types of Commercial Identity Resolution
I’ve been digging into this more since the publication of our research on identity resolution and customer data platforms and have revised my framing of identity resolution. To wit, I observe three co-existing types of commercial identity resolution architectures or systems happening in the world:
The first one I call distributed by design. This is the LiveRamps, The Trade Desks, mParticles, etc. of the world. These systems enjoy the power of massive data aggregation with [too] little of the risk and responsibility, as they are designed to be third parties relative to the data subject. These platforms are architected to ingest and process (resolve) personal information from a disparate array of services and devices.
The second one I call company-centric. This is the “big dog” platforms with millions or billions of users; the universes unto themselves. A company-centric identity resolution can also be distributed by design in the sense that it provides numerous small pieces of functionality which can be embedded as third-party resources into other companies’ apps and websites, allowing the big dogs to collect data on external users despite not necessarily having direct relationships with them. Microsoft is a good example of this. It’s also true that company-centric identification schemes can be and are ingested by distributed systems like LiveRamp. The lines implied by these two categories are fuzzy.
The third one I call standardized. This is the hiding-in-plain-sight globally coordinated efforts in Unified ID 2.0 and European Unified ID. Note that these efforts are championed primarily by distributed by design identity resolution and customer data platforms. Scanning the partners of just the Unified ID 2.0 standard is enough to give one pause: these are the platforms that want to know who you are and what you’re doing at all times. Notably absent are the big dogs.
A brief word about national/governmental identification schemes, like India’s Aadhaar and the US Internal Revenue Service’s ID.me: these systems operate somewhat like a big dog company-centric identification system, orchestrating personal information across their own services, with the exception that we don’t expect these systems to be either ingesting or sharing data with external, commercial platforms1. At Internet Safety Labs (ISL), we rate “big dog” platforms as critical risk “data aggregators”2. We do so for the following two reasons:
These corporate entities monetize personal information, either through ownership of advertising platforms, the selling of audience information, or other monetizing behaviors, and
These entities run multiple consumer products and services with inadequate transparency of how personal information flows across product lines.

The remainder of this post takes a closer look at the personal data strategies of Google and Facebook (Meta) and why they're so risky.
1.1 GAIA and Narnia: Google's Universal Identification and Cross-Product Personal Data Aggregation Grand Plan

In the wake of the recent Google search antitrust case in the US, Jason Kint published a long thread on a recently unsealed 325-page Google strategy document. The document, titled "Display, Video Ads, Analytics and Apps", contains a coordinated and synthesized set of business strategies describing how Google can:
More effectively coordinate the extraction of user information,
Better leverage user data across all of their AdTech, and
In general, increase ad revenues across its entire portfolio of products and services: "make it easier to add monetization to other Google O&O [owned and operated] properties." [3]

The document also covers how Google doesn't make as much money from sites it doesn't own and would like to assert its control to make them more like sites it does own, thereby increasing revenues. Nearly every product line's strategy contained in the document mentions the use of "GAIA signals" or "GAIA data". GAIA is Google's proprietary "universal ID" [4]. The plan clearly outlines how Google can better utilize the massive trove of personal information joined by its GAIA "universal IDs", amassed across its various owned and operated (O&O) properties, like Gmail and Chrome to name two of the largest. A highlight from page 126 (the section on "Smart Campaigns") makes clear Google's intention to share user information across all its properties to enrich its advertising services (project Narnia and Narnia2). But it's not enough to join user data across Google properties; the document also indicates an intention to join external data sources, such as streaming and TV ad networks (pg 150), describing the ingesting of external customer data and resolving it (i.e. identity resolution) to Google's GAIA IDs. Overall, the document describes an organization-wide, orchestrated plan to amass and unify user data (via GAIA IDs) to better leverage Google ads (Narnia 2.0) for both internal and external properties. How can Google users understand, never mind consent to, the use of their personal information in this wide-reaching way?
1.2 Facebook Admits Unknowability of User Data Processing

One of my favorite references for explaining why the world needs software safety labels is this story about two Facebook architects explaining how it's virtually impossible for Facebook to know where user data is going. The complexity and dynamism of software mean that it is not a bounded system, and it's never the same river twice. The story came out two years ago; I recently read the discovery document, written in April 2021, and it is really good. Its excerpts outline the fundamental problem of the unknowability of Facebook software's behavior. The discovery document also contains fascinating information on what Facebook must do to track personal data usage within its system [implement Curated Data Sources and a Purpose Policy Framework], and it's a massive undertaking: 450-750 engineer years over three calendar years. And even that's not enough. It also requires "heavy Ads refactoring and adoption work in warehouse."

Let's go back to that "closed form system" described by the Facebook engineers. The term comes from mathematics' "closed-form expression", describing an equation composed of "constants, variables and a finite set of basic functions connected by arithmetic operations and function composition." [5] If we look at realtime bidding as one example of a programmed system, we see that it is necessarily dynamic and unbounded. The participants (buyers) in the realtime bidding network are dynamic; the ad inventory itself is also dynamic. Realtime bidding is, by design, never the same river twice. The system is not a closed form system. Machine learning (ML) is another example: virtually all of the ML technologies generating much recent hype are also not closed form systems by design. They are constantly changing based on the training set, based on ongoing learning, and based on dynamic rule-making.
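To make the contrast concrete, here is a minimal, illustrative TypeScript sketch (not drawn from the Facebook filing or any real bidding system): the first function is closed-form in the sense quoted above, while the auction's behavior, and the set of parties that receive user data, depends on a bidder pool that can change between invocations.

```typescript
// Closed form: a fixed, finite composition of basic operations.
// The same inputs always produce the same, fully predictable output.
const monthlyPayment = (principal: number, rate: number, months: number): number =>
  (principal * rate) / (1 - Math.pow(1 + rate, -months));

// Not closed form: an RTB-style auction whose participants are mutated at
// runtime, so the data flow cannot be enumerated in advance.
type Bidder = { name: string; bid: (user: unknown) => Promise<number> };
const bidders: Bidder[] = []; // populated and changed dynamically while the system runs

async function runAuction(user: unknown): Promise<string | undefined> {
  const bids = await Promise.all(
    bidders.map(async (b) => ({ name: b.name, amount: await b.bid(user) }))
  );
  // Every bidder that happened to be in the pool at this instant has now received user data.
  return bids.sort((a, b) => b.amount - a.amount)[0]?.name;
}
```

The point of the sketch is simply that the second system's behavior depends on state outside the code itself, which is why its data sharing is so hard to audit or predict.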
2. Have We Agreed to Be Always Known and Tracked Online?

To summarize the situation: industry has developed techniques (distributed by design and company-centric) to interconnect and aggregate personal information such that we are always known and tracked online. As noted in the earlier-mentioned research paper, there are at least $9T (as in trillion) worth of industries that want to know who we are and what we're doing at all times. It's unlikely that we can stop this financially motivated juggernaut of universal identification. So what's to be done?
2.1 To Do List

Consent is dead. It's impossible, and the more we pretend that informed consent is possible given the unbounded nature of software, the more we are lying to ourselves. Privacy policies protect companies, not the people who use technology. Know how you've consented into the worldwide web of commercial surveillance? It's through this phrase found in many privacy policies: "…and we [may] share your data with our marketing partners."

We need more exposure of actual measured software behavior (ala ISL's App Microscope: https://appmicroscope.org/app/1579/). One day, it will be possible for systems to generate machine-readable records of processing activities: a kind of passport stamp showing how your data was processed (used by the first party, shared with and used by third parties). This will be a landmark moment in empowering people through transparency of actual system behavior. (A minimal sketch of what such a record might look like follows the footnotes below.)

Data broker regulation is inadequate. If a platform has your data, it should de facto have a first party relationship with you, and as such, you are entitled to all the proactive data governance rights allowed to you. In other words, nothing about me, without me. Data brokers aren't and never have been just third parties. Note that these data rights are unexercisable if people don't know that they're actually in a relationship with a particular platform. Thus, there also needs to be a requirement for these platforms to proactively notify all data subjects for which they hold information.

Is the selling of personal information safe for humans and humankind? We've agreed as a society that certain things are sacrosanct and that selling them unacceptably degrades and devalues them (such as votes, organs, children). We need to have a much deeper think about whether or not personal information should fall in that category.

Are data broker laws effective in their current form? It seems clear to ISL that not all actual data brokers are currently registered in the states requiring registration.

Privacy and safety experts, and perhaps regulatory experts, need to get more aware of and involved in the two universal commercial identification standards (Unified ID 2.0 and European Unified ID) pronto.

Identity resolution platforms and customer data platforms demand substantially more regulatory attention. Minimally, the massive troves of personal information are ripe for data breaches. Maximally, the public needs assurances that the platforms amassing this data are held accountable.

Footnotes:
1. Note that ISL has not confirmed this.
2. List of ISL designated data aggregators at the time of this writing: Adobe, Amazon, Apple, Google, Meta, Microsoft, and X.
3. https://storage.courtlistener.com/recap/gov.uscourts.vaed.533508/gov.uscourts.vaed.533508.1132.2_1.pdf, page 7.
4. See ISL paper on Identity Resolution and Customer Data Platforms for more information on universal identification schema.
5. https://en.wikipedia.org/wiki/Closed-form_expression
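As a purely hypothetical illustration of the "passport stamp" idea in the to-do list above, a machine-readable record of processing activities might take a shape like the following sketch. The field names and values are invented for this example and do not correspond to any existing standard.

```typescript
// Hypothetical shape for a machine-readable record of a processing activity.
interface ProcessingRecord {
  controller: string;   // first party that collected the data
  purpose: string;      // why the data was processed
  categories: string[]; // kinds of personal data involved
  recipients: string[]; // third parties the data was shared with
  legalBasis: string;   // e.g. consent, contract, legitimate interest
  timestamp: string;    // when the processing occurred (ISO 8601)
}

// Illustrative example record (all values are fictional placeholders).
const exampleRecord: ProcessingRecord = {
  controller: "news-site.example",
  purpose: "ad targeting",
  categories: ["device identifiers", "page views"],
  recipients: ["ssp.example", "data-broker.example"],
  legalBasis: "consent",
  timestamp: "2024-09-30T12:00:00Z",
};
```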
The post Identity Resolution and the Big Dogs appeared first on Internet Safety Labs.
The fourth revision of the draft NIST SP 800-63-4 Digital Identity Guidelines is now open for public comment.
The FIDO Alliance hosted a webinar on September 24, 2024, with top digital identity experts to discuss the latest updates to the standard and what they mean for passkeys.
Megan Shamas, CMO of the FIDO Alliance, was joined by guests Ryan Galluzzo, Digital Identity Program Lead at the NIST NCCoE, and Teresa Wu, co-chair of the FIDO Alliance Government Deployment Working Group and VP of Smart Credentials at IDEMIA. The panel unpacked the latest changes to the draft and shared what they mean for passkeys.
Webinar attendees also had an opportunity to get questions answered before the public comment submission deadline next month. NIST requests that all comments be submitted by 11:59 pm Eastern Time on October 7, 2024.
Watch the presentation below.
Kia ora,
Recent weather extremes in Aotearoa remind us of life's delicate balance. As I gaze at our last daffodils and watch the remaining lamb playing in the paddock – having sadly lost two to the cold earlier in the month – I'm reminded of the circle of life. This theme resonates with the recent closure of the Open Identity Exchange (OIX) at the end of August, a significant player in the digital identity industry that also faced a challenging financial climate.
Fortunately, Digital Identity NZ is thriving, with more organisations joining our mission. We’re excited to welcome 3PlusConsulting, Arrowhead, BeingAI, PaymentsNZ, QubitCyber, techHappy and Voco as new members. A heartfelt thank you to all our members who contribute to our mahi.
In recognition of International ID Day, individual member Vica Papp shared a blog post highlighting its significance. We continue to make progress, as seen in the PaymentsNZ – DINZ Digital Identity May 2024 sprint report, with more updates on collaborations to come.
I had the honour of being a panellist alongside the mighty Holly Rennie, Ralph Bragg and Adrian Smith at FSC24 on September 4. We explored how global innovation is shaping New Zealand's future in 'FinTech Innovation and Open Banking'. Don't forget to take advantage of our 10% discount offer for DINZ news readers to attend The Point 2024 – thank you Payments NZ! More information below.
Still basking in the glow of Digital Trust Hui Taumata 2024, we’re excited to share the wealth of content including the Opening Keynote on Trust Frameworks and an Innovation Spotlight on identity-centric solutions for enterprise security. Attendees received links to the presentations, and our Coffee Chat attendees engaged with Slido questions posed to our speakers and panellists – sparking great discussions! Look out for more insights from the Hui coming soon.
Recently, DINZ members AWS, Worldline and Xebo along with myself, met with Minister Collins to share our observations on the landscape. We left the meeting with a clear understanding of her expectations, reflecting a very positive engagement.
Looking ahead, we’re entering DINZ’s annual election cycle for the Executive Council. As we begin this process, we want to take a moment to sincerely thank and recognise our outgoing Councillors. Their contributions have been invaluable in shaping the direction and growth of our community.
We eagerly look forward to welcoming the next cohort of passionate members, ready to step into these important roles. This is the essence of spring – embracing new opportunities and growth while acknowledging what has brought us to this point.
Ngā mihi
Colin Wallis
Executive Director, Digital Identity NZ
Read the full news here: Spring Clean | September Newsletter
The post Spring Clean | September Newsletter appeared first on Digital Identity New Zealand.
Imagine if barcodes not only speed up your grocery checkout but also transform logistics, healthcare, and the overall efficiency of global supply chains.
In this episode, hosts Reid Jackson and Liz Sertl are joined by Rich Eicher, Director of codeREADr. With his extensive experience in barcode innovation, Rich shares insights into how modern camera-based barcode readers surpass traditional laser readers and why dedicated barcode scanning devices are preferred in specific environments.
Rich explains barcodes' critical role in various business applications, from facilitating accurate inventory management to preventing costly supply chain errors. He also elaborates on the industry's adaptation to consumer demands, the significant challenges of barcode inaccuracies and their impact on delivery services, and how advancements in AI and ChatGPT are poised to revolutionize data capture and processing across industries.
In this episode, you’ll learn:
The differences between laser and camera-based barcode readers commonly used in grocery stores.
The importance of barcodes in various business applications and the issues caused by barcode discrepancies in the supply chain.
How the upcoming GS1 Sunrise 2027 initiative is transitioning to QR codes for enhanced data capture.
Jump into the Conversation:
[00:00] Introducing Next Level Supply Chain
[01:01] Who Rich Eicher is, and what he does
[03:08] The different barcodes and their significance
[10:13] All about laser reading barcodes
[13:12] The importance of using barcodes and why companies are shifting to using them more
[16:22] The problems that come along with not using barcodes
[22:16] Other trends happening outside of barcodes
[27:22] Rich’s favorite technology he is using right now
[29:51] Closing
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guest:
Rich Eicher on LinkedIn
At the Digital Trust Hui Taumata in August, Juliana Cafik, Microsoft’s Identity Standards Architect, delivered a thought-provoking keynote on global digital identity frameworks. With over 27 years of experience, Juliana provided valuable insights into how trust frameworks enable secure, interoperable digital transactions worldwide.
What are Digital Trust Frameworks?
Trust frameworks establish rules for:
Ensuring secure digital interactions
Verifying identities in both public and private sectors
Applying region-specific levels of assurance for various transactions

Global Examples of Trust Frameworks
eIDAS (European Union)
Goal: Enable cross-border identity verification for 80% of EU citizens by 2030
Focus: Electronic signatures, digital wallets, and consistent identity proofing across member states

DIACC (Canada)
Focus: Economic growth through the adoption of digital identity
Approach: Public-private collaboration to ensure compliance and usability

New Zealand's Unique Position
Juliana praised New Zealand’s approach, emphasising these key elements:
Focus on Safety: New Zealand uniquely integrates safety into its trust framework
Key Framework Pillars: Identity management, privacy, security, data management, and facilitation
Collaborative Potential: Strong public-private partnerships can enhance adoption

Challenges and Opportunities
Global Challenges: Trust frameworks are complex, evolving alongside technology and regulations.
Opportunity for New Zealand: Learning from other countries' implementations, New Zealand can lead in trust framework innovation.

Conclusion
Juliana encouraged New Zealand to embrace its potential by fostering collaboration across sectors to build a robust, trusted framework that supports digital identity verification and secure interactions.
The post DINZ Hui Keynote: Juliana Cafik on Digital Identity Trust Frameworks appeared first on Digital Identity New Zealand.
In today’s digital landscape, data breaches and cyber threats are rapidly evolving. At a recent conference, Marc Airo-Farulla, Regional Sales Director of Entrust, discussed the importance of identity-centric security solutions as the cornerstone for enterprise data protection. He highlighted how identity management is reshaping cybersecurity strategies and shared insights on how organisations can safeguard their data effectively.
Shifting Threat Landscape
Traditional security methods are no longer enough: Perimeter defences like firewalls fail to address evolving threats.
Phishing remains a top concern: 85% of security breaches stem from phishing attacks, often exploiting employee vulnerabilities.

The Role of Identity Access Management (IAM)
Identity is now the frontline defence: Managing user access, including employees and contractors, is critical to mitigating risk.
Fragmentation challenges: As more systems and technologies are introduced, IAM becomes harder to manage and protect, leading to vulnerabilities.

Zero Trust Frameworks
Adopt a zero-trust mindset: Zero trust means no user or device is trusted automatically, even within the organisation's network.
Data security first: Protecting sensitive data is key, ensuring minimal access and maintaining strict oversight of who has access to what.

Real-World Breach Examples
Marc shared a cautionary tale of an Australian e-subscription company that collapsed after a cyberattack. Within two months, the business was wiped out, emphasising the dire consequences of inadequate cybersecurity.
The Future of Security with Entrust
Investing in live identification solutions: Entrust has been at the forefront of developing future-proof security systems, like their partnership with Onfido for live identification.
Securing digital assets: Using tools like hardware security modules (HSMs) can safeguard critical business data.

Best Practices for Enterprises
Educate and train employees: From C-suite executives to the newest team members, everyone needs to understand the importance of security measures.
Limit data collection: Only store what is necessary, minimising the risk in case of a breach.

Identity-centric security solutions are the future of enterprise protection. By shifting the focus to identity management and implementing zero-trust frameworks, organisations can better protect their digital assets and reduce the risk of devastating breaches.
Thanks to Entrust for this insightful talk, and for supporting the Digital Trust Hui Taumata in 2024.
The post DINZ Hui Innovation Spotlight: Why Identity-Centric Security is Crucial for Enterprise Protection appeared first on Digital Identity New Zealand.
Observed annually on 16 September, International Identity Day aligns with the United Nations’ Sustainable Development Goal (SDG) 16.9, which aims to provide legal identity, including birth registration, to all people by 2030. While most of us can easily prove our identity, for millions around the world, the lack of legal identity remains a significant barrier to accessing even the most basic services.
If you’re reading this, the chances are that you have a legal identity and can prove it. But without one, life is vastly different. Without legal documentation, children may miss out on vaccinations and education, adults are unable to secure formal employment, access healthcare or welfare, vote in elections, start a business, use banking services, travel abroad, register their children’s births, or even claim inheritance or pensions. In effect, you don’t officially “exist.”
The Global Picture: Millions Left Behind

In 2021, the World Bank's ID4D Global Dataset reported that over 850 million people worldwide had no way to prove their identity. Around 540 million of these individuals were in Africa, and half of all women in low-income countries lacked identification. Although progress has been made since then, it is difficult to determine how much progress has been achieved until the next dataset is published.
Globally, civil registration programmes are being increasingly integrated with healthcare systems to ensure children are enrolled early. National birth registration initiatives have been launched or accelerated in countries like Cameroon, Zimbabwe, Nigeria, and Papua New Guinea. This issue is not confined to low-income nations, as high-income countries, including Australia, are also making strides in streamlining identity processes. A recent pilot in New South Wales, for instance, allowed parents to register the birth of their baby across federal and state government agencies using a single account. This “tell us once” approach eliminates the need for parents to interact with multiple government agencies—reducing up to seven separate interactions.
The Challenges Closer to Home

Whilst Aotearoa New Zealand may seem far removed from these global statistics, we are not without our own identity challenges. Certain groups in our population struggle to prove who they are, including rural communities, blind and deaf citizens, former refugees, unhoused people, those who have escaped domestic abuse, and individuals recently released from prison. These groups often remain legally invisible. This raises the question: can Aotearoa help address this before 2030, or will we run out of time to meet the goal?
Take Action and Learn More

International Identity Day serves as a reminder that the right to identity is fundamental. Without it, people are denied basic human rights. For a deeper understanding of this global issue and the progress being made, we recommend listening to the ID16.9 podcast, which offers valuable insights into the state of identity inclusion around the world. The podcast is available on Spotify, Apple, Google, or at the ID16.9 podcast website.
As we move towards 2030, it is clear that achieving universal legal identity is not just about meeting a target set by the UN. It’s about unlocking access to opportunities, dignity, and human rights for everyone. Let’s continue to push for meaningful change, both here in Aotearoa and globally.
By Vica Papp
The post International Identity Day 2024: Why Legal Identity Matters for Everyone appeared first on Digital Identity New Zealand.
On September 18, 2024, Blockchain Commons held its second Round Table on FROST. Almost twenty expert cryptographers, designers, and developers came together to share the challenges and successes they’ve had with FROST over the last year as well as the advances of their differing approaches.
A full log of the meeting is now available, including video, rough transcript, rough summary, and PDFs of all of the presentations.
Our next FROST meeting will be a FROST Developers meeting, focused on helping wallet developers to implement FROST (and why they might want to). It’s scheduled for December 4th. Sign up for our Gordian Developers mailing list or Signal channel to receive an invite.
Thank you to HRF for supporting Blockchain Commons’ FROST work in 2024.
Join us for a series of online conversations about the work of strengthening information ecosystems in these regions.
The post Join our October online event series: strengthening information ecosystems appeared first on The Engine Room.
1. What is the mission and vision of Docusign?
Docusign's mission is to bring agreements to life by accelerating the process of doing business and simplifying people's lives. With its Docusign IAM platform, Docusign unleashes business-critical data that is trapped inside documents and disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign IAM, companies can create, commit to, and manage agreements easily. Focusing on the 'commit' capability, where identity verification is most relevant, Docusign's extensive portfolio of identity verification solutions makes it simpler for stakeholders to commit to agreements through advanced, AI-enabled identity verification and multiple levels of authentication. With capabilities such as phone authentication, ID verification, biometric detection, and FINTRAC-compliant workflows, signers can easily confirm their identities, and senders (i.e. businesses) can securely capture and store the identity information provided during the agreement completion process. This ensures that all parties are who they claim to be and that agreements are enforceable.
2. Why is trustworthy digital identity critical for existing and emerging markets?
Over the last few years, we've seen the highest volumes of fraud cases ever on record (Cifas, Fraudscape 2024). It's therefore understandable that we're seeing drastically higher levels of regulatory scrutiny, and more requirements being imposed on businesses to introduce strong identity verification methods for digital interactions. This scrutiny isn't just in mature markets, but also in emerging ones, where the rapid adoption of new technologies that accompanies fast-paced growth is necessitating urgent regulatory oversight. Trustworthy digital identity for both existing and emerging markets is therefore essential for secure, efficient, and inclusive digital economies. For emerging economies specifically, widely available identity verification tools that are easy to use can help promote secure, sustainable, long-term economic growth that ensures equitable access to increasingly digital services.
3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?
The local and international benefits of digital identities are transformative, in the sense that they can help enhance the security of increasingly digital interactions, improve the efficiency of these interactions, and also make them more accessible across various sectors. For example, the adoption of digital identities can further enhance trust in online interactions, making it easier for consumers to engage in e-commerce transactions. This will likely lead to the expansion of the digital economy in Canada, where secure and convenient online shopping experiences will become safer and, therefore, more adopted. Globally, this could drive the growth of e-commerce, particularly in developing economies where digital identities can securely bridge the gap between offline and online markets. That’s where Docusign’s portfolio of identity verification solutions comes in. Our extensive portfolio offers enhanced signer identification and authentication capabilities built into any agreement workflow, enabling organizations to transact a full range of agreements with increased trust and ease of use.
4. What role does Canada have to play as a leader in this space?
Canada has introduced a series of anti-fraud initiatives that have made a significant impact on combating various forms of fraud across the country (public awareness campaigns, industry collaboration (‘Canadian Bankers Association’s Fraud Prevention Month’), strong legislative frameworks (FINTRAC), etc.). These initiatives have made significant progress in reducing fraud, increasing awareness, and improving recovery efforts locally. By advocating for similar initiatives internationally, Canada can influence the global development of digital identity systems to ensure other countries can reap similar benefits. Through innovation, collaboration, and advocacy, Canada can help ensure that digital identity becomes a force for good in the global economy.
5. Why did your organization join the DIACC?
Focused on securing the online agreement space for everyone, Docusign joined the DIACC to help shape the future of digital identity in Canada and contribute towards developing a more secure, inclusive and beneficial digital agreement ecosystem for its Canadian customers. Being able to collaborate with like-minded industry leaders and drive innovation across the country makes DIACC membership a valuable investment for Docusign.
6. What else should we know about your organization?
Agreements are based on intention and identity: organizations need to be able to trust that signers are who they say they are. The standard practice of verifying a signer's identity is to send a link to the signer's email address. But agreement value, sensitivity, business risk, regulation, or legal requirements can drive the need for enhanced identification. The challenge is to deliver stronger verification while keeping the overall experience user-friendly. That's where Docusign Identify comes in. Identify provides a portfolio of enhanced signer identification and authentication capabilities built into the agreement workflow, enabling organizations to transact a full range of agreements with increased trust. These solutions include:

* ID Verification: FINTRAC-compliant digital identity proofing of participants in agreement workflows via biometric checks such as AI-enabled liveness detection and verification of passports, driver licenses, or permanent resident cards
* Phone Authentication: multi-factor authentication via text message or phone call
* ID solutions for digital signatures: meet requirements for UK and EU electronic identification, authentication and trust services (eIDAS) compliant Advanced (AES) and Qualified Electronic Signatures (QES)
* Network of trust service solutions: easy access to our tightly-integrated global network of trust service providers for region-specific compliance

To learn more, visit www.docusign.com/en-ca/products/identify
Join us on the latest episode of the Identity at the Center podcast as we explore the critical components of a successful IAM program. We break down the key elements required to build a solid foundation for your IAM program and set you up for success.
Watch at https://www.youtube.com/watch?v=5-kRe187AG0 or listen in your podcast app.
We are delighted to announce that Paul Knowles, Head of the Advisory Council at the Human Colossus Foundation (HCF) and co-founder of the Foundation, will attend the 9th International Conference on Data Mining & Knowledge Management (DaKM 2024) in Copenhagen, Denmark, on the 21st-22nd September 2024.
DaKM 2024, September 21-22, 2024, Copenhagen, Denmark
Paper Presentation

In addition to his active role at HCF, Paul will present his paper titled "Data-Centric Design: Introducing an Informatics Domain Model and Core Data Ontology for Computational Systems."
This paper marks a significant leap forward in redefining system architectures through the Informatics Domain Model and Core Data Ontology (CDO), promoting a shift from traditional node-centric designs to a data-centric paradigm. These models enhance data security, semantic interoperability, and scalability across distributed data ecosystems with their quadrimodal domain structure: objects, events, concepts, and actions.
You can find further details and the abstract of the paper here.
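As a rough, illustrative sketch (not taken from the paper itself), the quadrimodal domain structure described above could be modeled along the following lines; the field names are placeholders standing in for whatever the Core Data Ontology actually specifies.

```typescript
// Illustrative types only: one interface per mode of the quadrimodal structure.
interface DataObject { id: string; attributes: Record<string, unknown> } // objects
interface DataEvent  { id: string; occurredAt: string; involves: string[] } // events
interface Concept    { id: string; definition: string } // concepts
interface Action     { id: string; actor: string; target: string } // actions

// A record in a distributed data ecosystem might reference all four modes.
type QuadrimodalRecord = {
  objects: DataObject[];
  events: DataEvent[];
  concepts: Concept[];
  actions: Action[];
};
```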
As part of his contribution to DaKM 2024, Paul has been invited to chair Session 3. The session will cover various topics, from AI-powered assistive technologies to virtual reality and intelligent community-driven platforms. It promises to explore cutting-edge solutions with potential applications in HCF's ongoing initiatives around distributed data ecosystems and AI development.
Paul will oversee discussions on the following topics during Session 3 at DaKM 2024:
Topic 1: An Immersion Sailing Experience and Simulation Feedback System for Disabled People using Artificial Intelligence and Virtual Reality – Presented by HoiNi Yeung and Ang Li, this talk will showcase a virtual reality sailing simulator designed to help individuals with disabilities practise sailing in a realistic environment using AI and VR technology.
Topic 2: An Intelligent Robot Arm used to Automate Chores to Eliminate Time Waste using Computer Vision – Presented by Yifei Zhang and Jonathan Sahagun, this presentation will cover the use of computer vision and AI to automate household tasks, improving adaptability and efficiency in daily chores.
Topic 3: Enhancing Indoor Environments through Augmented Reality and Artificial Intelligence for Personalised Plant Integration – Presented by Yingqi Wang and Marisabel Chang, discover how AR and AI are used in PlantAR to enhance indoor environments by providing personalised plant recommendations, promoting better air quality and well-being.
Topic 4: A Smart Community-Driven Tutoring Mobile Platform using Artificial Intelligence and Machine Learning – Presented by Haoyun Yang and Yu Cao, this platform leverages AI for personalised quizzes, encouraging peer-to-peer learning and technological innovation in education.
Topic 5: An Intelligent System to Help Individuals with Mobility Issues Crack Eggs using an App and a Bluetooth-Connected Mechanical Device – Presented by Alexander Xu and Jonathan Sahagun, a Bluetooth-enabled device designed to help individuals with mobility issues by automating egg-cracking using machine learning.
Topic 6: Medifact: A Reliable Mobile Application for Combating Medical Misinformation using Verified Data Sources – Presented by Annabel Shen Tu and Andrew Park, this mobile app tackles the spread of medical misinformation through verified health data and AI-driven validation processes.
Topic 7: An Intelligent Mobile Platform to Recognise and Translate Sign Language using Advanced Language Models and Machine Learning – Presented by Arlene Chang and Jonathan Sahagun, this platform translates American Sign Language (ASL) into English and vice versa, bridging communication gaps between Deaf and hearing individuals using AI.
Topic 8: A Smart Medicine Mobile Platform for Injury Diagnosis and Mental Stress Management using Artificial Intelligence and Machine Learning – Presented by Zelin Jason Hu and Garret Washburn, this mobile app provides AI-generated injury diagnoses and mental stress management solutions, improving accessibility to healthcare.
Topic 9: A Policy Report Evaluating the National Assessment Program for Literacy and Numeracy (NAPLAN) Reform in Australia – Presented by Wenya Zhang, a critical evaluation of the NAPLAN reform, focusing on its impact on students and proposing policy improvements for standardised testing in Australia.
For more details about the programme schedule, visit the Programme schedule.
If any of these topics align with your work in distributed data ecosystems or DDE-related issues, don't hesitate to contact Paul Knowles or the HCF advisory team to explore potential synergies. Email: Ac@humancolossus.org
About Paul Knowles
Paul Knowles is a leading figure in decentralised semantics and co-founder of the Human Colossus Foundation. He chairs the Decentralised Semantics Working Group and has over 25 years of experience in pharmaceutical biometrics, having worked with companies such as Roche, Novartis, GlaxoSmithKline, Amgen, and Pfizer. His contributions include the Overlays Capture Architecture (OCA) for semantic interoperability. He also holds advisory roles at Secours.ai and Global Privacy Rights at 0PN Governance Architecture.
About the Human Colossus Foundation
At the Human Colossus Foundation, we envision a Dynamic Data Economy (DDE) where data is harmonised, secure, and framed by robust governance principles. Our mission is to empower businesses and individuals with the tools and frameworks they need to make better-informed decisions through real-time, accurate data. The DDE bridges existing standards while embracing new data-centric structures that respect human and jurisdictional differences.
Are you concerned about data brokers and commercial surveillance, but never heard of identity resolution platforms? This webinar is for you! This webinar provides an overview and explanation of the infrastructure that powers the worldwide web of commercial surveillance, which is a data aggregating force, powering data brokers and a lack of online anonymity. In this webinar, we look deeply at:
Identity resolution and customer data platforms,
How they work,
Why they're risky, and
What you can do to protect yourself.

The target audience for this webinar is privacy professionals (lawyers, regulators, and industry) and concerned users of technology.
Geekiness Level: Medium
The post Webinar: The Worldwide Web of Commercial Surveillance Identity Resolution & Customer Data Platforms appeared first on Internet Safety Labs.
Blinder, Cranium, Cyware, Dell Technologies, Fr0ntierX, Harvey, HiddenLayer, Invariant Labs, Lasso Security, Legit Security, Logitech, Mozilla, Styrk AI, Thomson Reuters, TrojAI, and VE3 Join a Growing Roster of Organizations Committed to Advancing AI Security
Boston, MA – 19 September 2024 – The Coalition for Secure AI (CoSAI), an OASIS Open Project that launched on 18 July 2024, is announcing the addition of EY, Protect AI, Trend Micro, and Zscaler as its newest Premier Sponsors. These industry leaders join CoSAI’s expanding alliance of organizations, which now includes more than 30 partners dedicated to advancing the security of artificial intelligence (AI). Together, they support CoSAI’s mission to develop and share open-source methodologies, standardized frameworks, and tools for secure AI development and deployment.
CoSAI is a collaborative open-source initiative designed to give all practitioners and developers the guidance and tools they need to create Secure-by-Design AI systems. Three strategic workstreams have been established within CoSAI, with plans to add more over time: software supply chain security for AI systems, preparing defenders for a changing cybersecurity landscape, and AI risk governance.
In addition to welcoming new Premier Sponsors, CoSAI is pleased to introduce its latest General Sponsors: Blinder, Cranium, Cyware, Dell Technologies, Fr0ntierX, Harvey, HiddenLayer, Invariant Labs, Lasso Security, Legit Security, Logitech, Mozilla, Styrk AI, Thomson Reuters, TrojAI, and VE3. These organizations further diversify and strengthen CoSAI’s community of stakeholders committed to advancing AI security.
“Joining CoSAI underscores the EY organization’s dedication to fostering innovation while at the same time enhancing the security and integrity of AI technologies,” said Yang Shim, EY Americas Technology Consulting Leader. “By working alongside other industry leaders, we aim to contribute to the development of robust frameworks that will empower enterprises and individuals to shape the future with confidence through the secure integration and deployment of AI,” added Kapish Vanvaria, EY Americas Risk Leader.
“At Protect AI we are on a mission to create a safer AI-powered world. As the prevalence of AI within organizations grows, so must the ability to secure it,” said Ian Swanson, CEO and Co-founder, Protect AI. “We are proud to join CoSAI as a Premier Sponsor. Through this collaboration, we aim to help shape the development of frameworks and standardized MLSecOps processes that enhance the security, safety, and trust for AI applications across industries.”
Eva Chen, CEO at Trend Micro, said, “We are dedicated to leading the charge in securing AI deployment, ensuring that security is seamlessly embedded from the ground up. Our collaboration with CoSAI reflects our commitment to pioneering efforts that not only protect organizations but also leverage AI to enhance security and uphold the trust of consumers. By bringing together industry leaders, we aspire to set new standards for the integrity and safety of AI systems, driving positive change across both the industry and broader society.”
“Zscaler is proud to join CoSAI to collaborate with industry leaders. Our collective aim is to establish best practices that ensure AI technologies are not only innovative but also trustworthy,” said Deepen Desai, Chief Security Officer, Zscaler. “This partnership will enable Zscaler to leverage the power of AI in order to deliver the most advanced security solutions for our customers. Through this collaboration, we’re striving to set a new standard for AI-driven security that prioritizes transparency, reliability, and excellence.”
These Premier and General Sponsors will join forces with CoSAI’s founding Premier Sponsors – Google, IBM, Intel, Microsoft, NVIDIA, and PayPal – and founding General Sponsors, including Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI, and Wiz. With the support of these industry leaders and experts, CoSAI is poised to make significant strides in establishing standardized practices that enhance AI security and build trust among stakeholders globally.
Participation
Everyone is welcome to contribute technically as part of the CoSAI open-source community. OASIS welcomes additional sponsorship support from companies involved in this space. Contact join@oasis-open.org for more information.
Support for CoSAI
Blinder:
“AI is the most transformative technology of our generation. As attorneys and corporate legal departments adopt AI, data and IP security are at the forefront of their priorities. Blinder is proud to join CoSAI and further the mission of accelerating secure AI development. The open source OASIS model aligns with our focus on fair use IP, and democratizing AI security.”
— Nils Tracy, CEO & Founder, Blinder
Cranium:
“Cranium is proud to join CoSAI to advance AI security. As the leading enterprise AI security and trust software firm, we believe that by sharing our insights and best practices with other industry leaders we can collectively and securely develop and deploy AI. Only through collaboration can we truly strengthen AI security to build trust in each organization’s and third-party AI.”
— Felix Knoll, COO & CRO, Cranium AI, Inc.
Cyware:
“AI is transforming cybersecurity, enabling speed and scale at unprecedented levels. However, the opportunity AI presents is only matched by the risk it introduces. We are committed to developing secure, ethical AI to not only protect our systems but also to build trust with our clients and the broader community. Joining CoSAI was a natural decision, aligned with our mission to drive innovation while ensuring that safety and integrity are at the core of everything we do.”
— Sachin Jade, Cyware Head of Product
Dell Technologies:
“We share an unwavering commitment to collaboration and innovation within the AI ecosystem which includes empowering organizations globally to adopt AI safely and securely. By working alongside industry leaders in the Coalition, we aim to help establish necessary industry standards and contribute to the development of secure open-source solutions.”
— John Roese, Global Chief Technology Officer and Chief AI Officer, Dell Technologies
Fr0ntierX:
“AI is reshaping the world, and security must evolve with it. By joining the incredible lineup at CoSAI, we’re excited to be part of a global effort to ensure AI continues to push boundaries safely and responsibly. We look forward to driving innovation in AI and building systems people can rely on every day without compromising their security.”
— Jonathan Begg, CEO, Fr0ntierX
Harvey:
“Trust is the most important factor to the future success of AI. From Day 1, Harvey has built its platform with the highest security standards to become a reliable AI partner for its high-performing legal clients. We are thrilled to join CoSAI to share our experience and contribute to AI security standards for everyone.”
— Winston Weinberg, CEO and Co-Founder, Harvey
HiddenLayer:
“AI has never been easier to develop, use, and implement within organizations. As deployment continues to surge, so does the need to adopt common security standards and best practices in AI security. HiddenLayer is proud to join the CoSAI in our shared mission to support the widespread adoption of AI security principles.”
— Malcolm Harkins, Chief Security & Trust Officer, HiddenLayer
Invariant Labs:
“As AI systems and agents rapidly become integral parts of any organization, addressing their security and reliability is one of the key challenges blocking wider adoption. At Invariant Labs, we are proud contributors to open-source AI security, and we are excited to join the CosAI ecosystem and democratize secure AI together.”
— Mislav Balunovic, Co-Founder and CTO, Invariant Labs
Lasso Security:
“LLM and GenAI technologies are now non-negotiable assets for businesses striving to maintain a competitive advantage. However, as GenAI deployment accelerates, organizations must prioritize security standards and best practices to ensure safe and responsible use. At Lasso Security, we are proud to lead the way in securing GenAI deployments and to join CoSAI in our shared goal of protecting organizations from existing and emerging threats.”
— Elad Schulman, CEO & Co-Founder, Lasso Security
Legit Security:
“As AI grows more integral to how we build and deploy software, ensuring the security and integrity of AI systems throughout the software development lifecycle is more urgent than ever. Legit Security is proud to join CoSAI in advancing the security standards for AI systems and infrastructure, enabling organizations to innovate with confidence, safeguarded against emerging AI threats. Together, we will drive forward a secure future for AI in software development.”
— Liav Caspi, Co-Founder and CTO, Legit Security
Logitech:
“Logitech is proud to join CoSAI in its mission to enhance AI security. As a leader in developing human-centric technologies, we recognize the importance of ensuring that AI is developed and deployed responsibly. We are committed to collaborating with CoSAI and industry partners to create a safer and more secure AI ecosystem.”
— Nabil Hamzi, Head of Product Security, Logitech
Mozilla:
“Mozilla has contributed for decades in security and privacy and that is evident within the standards and protocols that we all use today. We’re proud to support OASIS and CoSAI’s mission in making AI safe and secure. We can’t wait to collaborate in building new and innovative technologies in making AI trustworthy in an open and transparent way.”
— Saoud Khalifah, Director from Mozilla
Styrk AI:
“The partnership with CoSAI allows Styrk to further its mission of enabling safe and secure usage of AI for all. CoSAI’s community focused efforts both in standardization and research in the critical area of AI security fills a very timely need and Styrk will now be able to leverage the platform to contribute back to the community while democratizing AI adoption.”
— Vilayannur Sitaraman, CTO of Styrk AI
TrojAI:
“TrojAI is committed to developing comprehensive security solutions to safeguard AI applications and models from evolving threats. We are thrilled to join CoSAI in their mission of advancing the security and trustworthiness of AI systems to ensure the responsible and secure deployment of AI. Together, we will set new standards for AI integrity and security.”
— Lee Weiner, CEO, TrojAI
VE3:
“VE3 is proud to support CoSAI’s mission to establish a global framework for AI safety and security. As a pioneer in AI development, we recognize the paramount importance of advanced AI security and governance. Joining CoSAI represents our commitment to advancing security in AI development and implementation.”
— Manish Garg, Managing Director, VE3
About CoSAI:
CoSAI is an open ecosystem of AI and security experts from industry-leading organizations dedicated to sharing best practices for secure AI deployment and collaborating on AI security research and product development. CoSAI operates under OASIS Open, the international standards and open source consortium.
Media inquiries: communications@oasis-open.org
The post OASIS Coalition for Secure AI Welcomes EY, Protect AI, Trend Micro, and Zscaler as Newest Premier Sponsors appeared first on OASIS Open.
We've got another sponsor spotlight episode of the Identity at the Center podcast for you this week. We talked to Marta Nappo of Panini about their role in the identity verification space and how they are addressing that need for their customers. You can learn more by watching the episode at https://www.youtube.com/watch?v=Ekak4H6ccss or listening in your podcast app. Learn more about Panini at panini.com
Our new Mozilla Foundation-funded Friends of the Earth ‘Green Screen’ project has the express aim of developing a set of AI principles that the climate movement can use. The project involves desk research and a gathering of experts to influence and contribute to these principles, creating a co-designed starter for ten that others can build upon. We will then take what we’ve learned to report for the Friends of the Earth policy site.
Part of this involves setting up an online roundtable to gather insights from a diverse range of experts. In our project kickoff call last week, we realised that clarifying the ambitions and aims of such an event is something we do instinctively.
We’re big fans of community calls, but the roundtable we will be putting together is something slightly different. In this post, we’re going to give you a few things to think about when you’re gathering people together to co-design a policy or set of best practices — or when you’re more on the development side of the continuum.
cc-by-nd Bryan Mathers for WAO

The Development-Engagement continuum

First off, there's tons of value in getting people together and working collaboratively towards something. There is also a lot of nuance in such an endeavour, so it's best to understand what your long-term goal for the project might be. We like to determine where on a continuum between 'development' and 'engagement' a particular project might sit.
Development is the side of the continuum that focuses on the final output of the project. This could be, for example, a report, article, or set of recommendations. Engagement, on the other hand, can serve as a launchpad for building community. While there may be outputs along the way, the main goal is to find and engage with people who are interested in a particular topic. As it happens, we’ve written extensively about how to build a community of practice in this short (and free!) resource.
If you're mostly focusing on development, as we are with our Friends of the Earth (FoE) roundtable, you will need a different kind of preparation and facilitation than if you're focused on a longer-term community-building initiative.
Of course, many projects have an eye on both the short term and the longer term, and so are looking to do development and engagement. However, it’s important to understand that community building requires designated moderation, facilitation and a place to interact. If there is no one to actively manage and engage the community, it can become stagnant!
Co-designing for Development

It's important to note that not every collaborative effort needs to lead to a fully fledged community. For example, with our FoE Green Screen project, the focus is very much on the set of principles that other organisations can build upon.
If you find yourself looking to engage a group of people around a particular project, like policy development or a set of principles, you should have a think about how your co-design event fits into the “project management triangle”.
Let’s take each point in turn.
PM triangle cc-by Laura Hilliger for WAO

1. Funding

How much funding is available for the co-design piece of your project? Can you afford to pay people to participate?
If you can pay people for their input, you absolutely should. Even a small portion of your overall budget can work: offering people a one-time fee or goodwill payment shows them that you value their contribution. However, we know that sometimes remuneration is not possible.
If there isn’t enough funding to pay people for their time, make sure you plan to spotlight their contribution in other ways. You can issue badges to contributors, ensure all contributors are named in final outputs and publicly thank people when you share the final outputs.
cc-by-nd Bryan Mathers for WAO

2. Scope

Based on the funding available, what is the scope of the collaboration? How can you ensure that you are building collaboratively?
Will you host a single event? You might also have three events or five of them. You might set a six month timeline for your project. Be clear about what you’re asking of people as you are asking them to participate. Are you inviting them to the first of a series of events or a singular event? Is there prep work or ‘homework’ involved? How will their contributions be attributed?
Together with other contributors, you’ll need to establish procedures to receive feedback and inputs, as well as how you will process them to create iterations. As always, documentation goes a long way and writing openly about the scope and decisions made along the way will help contributors understand the plan.
3. Time

How much time and effort can you ethically ask others to put in? How much time are you putting in?
Depending on your budget, you’ll need to figure out how much time you need to complete your policy, principles, best practices or whatever the output actually is. You’ll also want to think about how much time you require from the people you’d like to invite.
If you are looking for open contributions to your project, it’s best to try to minimise the amount of time other people will need to participate. Help contributors give good insight quickly by asking specific questions and giving people the opportunity to give feedback via email or a voice text.
Bringing things together

The three sides of the project management triangle play together to shape your co-design event. Depending on your parameters, it might make sense to do some of the development work on your own and then ask for input and feedback. Alternatively, you might want to get everyone involved from the beginning and co-design the entire project through a series of events. Be adaptable and flexible as you begin to work with others and refer back to the 5 principles of open to remind yourself of what it means to work openly.
Knowing where you sit on the 'development/engagement' continuum and mapping out your funding, scope and time will help you plan a co-design event that leads to great outputs and strengthened relationships.
Need help figuring out how to design your co-design initiative? Get in touch!
How to build a co-design event was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
The post Sertifier Joins Velocity Network Trust Framework appeared first on Velocity.
Husnan Bajwa, Beyond Identity
Josh Cigna, Yubico
Jing Gu, Beyond Identity
For enterprises that have implemented a second factor, such as SMS OTP, to mitigate the risk of credential stuffing, this paper will provide guidance on displacing passwords + OTP with passkeys.
Audience

This white paper is intended for ISRM and IT staff tasked with deploying and maintaining multi-factor authentication (MFA) sign-in processes.
1. Introduction

Many enterprises aiming to secure their workforces have deployed SMS and application-based one-time passcodes (OTPs) as an additional factor of authentication to passwords. This whitepaper is aimed at these organizations that are now considering a move to FIDO authentication with passkeys. While this whitepaper focuses on OTPs specifically, the discussion and recommendations can be generalized to any system using phishable second factors, including application-based OTP and push notifications.
This whitepaper compares OTPs as an additional authentication factor to passwords against passkeys in terms of security, user experience, and ease of deployment, and it provides general guidance about migrating from OTPs to passkeys in order to improve user experience while strengthening the organization's overall security posture. The guidance within this paper covers key steps for moving towards passkeys, including planning, use case identification and documentation, pilot considerations, as well as deployment and training guidance. This document targets low-assurance use cases, including internal authentication, external authentication, and third-party and B2B authentication strategies. Given that organizations typically implement OTPs as the second factor to passwords, for this document all references to OTPs should be assumed as being used as a second factor to passwords.
This document will not cover specific vendor technologies or implementations. For guidance on moderate or high assurance use cases please refer to additional whitepapers published by the FIDO Alliance [1]. As this document is a part of a series of whitepapers, it is recommended that the reader start with the introductory document [2].
2. OTP MFA vs Passkeys: Why Choose Passkeys

Passkeys offer several benefits to security, user experience, and ease of deployment when compared to OTPs.
2.1 Security
OTP-based MFA has been widely adopted to mitigate the risk of credential stuffing and reuse. SMS and authenticator application-based OTP are the most commonly deployed solutions due to their relative low-cost and ease of deployment across a broad set of users and use cases. The relative simplicity of this type of MFA, however, leaves it vulnerable to social engineering and many MFA bypass toolkits, because no bidirectional communication exists between the secrets generator and the validating identity provider (IDP), meaning that an OTP can be intercepted and used by a third party without the knowledge of the end user or IDP.
Additionally, OTP-based MFA requires trust in a device that an organization may neither manage nor have visibility into, including its security posture. This means that organizations are relying on end users to maintain the security of their own devices and to discern phishing attempts. While user training can help address some of these attacks, historic guidance, like checking for secure connections and familiar URLs, still relies on an ever-vigilant user base.
Passkeys provide phishing-resistant, replay-resistant sign-ins that reduce the cognitive load on users and strengthen organizations’ overall security posture. This is achieved because passkeys implement a cryptographic challenge-response protocol scoped to the relying party’s domain. The authenticators then rely on secure behaviors, like biometric proofs or PINs, to unlock the credentials on the authenticator while retaining a user-friendly experience. With passkeys, an organization can have a strong first factor of authentication for passwordless scenarios or a strong second factor for traditional MFA workflows.
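To make the challenge-response flow concrete, the following is a minimal browser-side sketch of a passkey sign-in using the standard WebAuthn API (navigator.credentials.get). It is an illustrative sketch rather than part of the whitepaper's guidance: the endpoint paths and payload shapes are assumptions, and a real relying party must also verify the returned assertion server-side against the stored public key.

```typescript
// Minimal sketch of a browser-side passkey sign-in (WebAuthn assertion).
// Endpoint paths and payload shapes are illustrative assumptions.

function base64urlToBytes(value: string): Uint8Array {
  const b64 = value.replace(/-/g, "+").replace(/_/g, "/");
  return Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
}

function bytesToBase64url(buffer: ArrayBuffer): string {
  const bin = String.fromCharCode(...new Uint8Array(buffer));
  return btoa(bin).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

async function signInWithPasskey(): Promise<void> {
  // 1. Ask the relying party for a fresh, single-use challenge.
  const options = await (
    await fetch("/webauthn/authentication/options", { method: "POST" })
  ).json();

  // 2. The authenticator signs the challenge after a local user gesture
  //    (biometric or PIN). The credential is scoped to the rpId, so a
  //    look-alike phishing domain cannot invoke it.
  const credential = await navigator.credentials.get({
    publicKey: {
      challenge: base64urlToBytes(options.challenge),
      rpId: options.rpId, // e.g. "example.com"
      allowCredentials: [], // empty: let the user pick a discoverable passkey
      userVerification: "preferred",
    },
  });
  if (!credential) return; // the user cancelled the ceremony
  const assertion = credential as PublicKeyCredential;
  const response = assertion.response as AuthenticatorAssertionResponse;

  // 3. Return the signed assertion; the server verifies the signature,
  //    origin, and challenge before creating a session (replay resistance).
  await fetch("/webauthn/authentication/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: assertion.id,
      rawId: bytesToBase64url(assertion.rawId),
      clientDataJSON: bytesToBase64url(response.clientDataJSON),
      authenticatorData: bytesToBase64url(response.authenticatorData),
      signature: bytesToBase64url(response.signature),
    }),
  });
}
```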
2.2 User Experience
Passkeys improve the user experience over passwords and OTPs in several ways, including:
- Passkeys work even when there is poor cell coverage, whereas SMS OTPs require mobile network connectivity. For example, a user may have wireless access on an airplane but not be permitted to use SMS; in this instance, the SMS OTP cannot be delivered, whereas a passkey can still be used to authenticate.
- AutoFill UI enables seamless integration within browsers on mobile devices and on desktops (see the sketch after this list).
- Up to four times faster login, with no need to wait for code delivery [3].
- Protection against mis-keyed passwords and codes.
- Passkeys build on common behaviors for authentication, like biometric proofs (face or fingerprint).
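The AutoFill UI mentioned in the list above corresponds to what WebAuthn calls conditional mediation: the browser offers available passkeys alongside saved usernames in its autofill suggestions. A minimal sketch, under the same assumptions about server endpoints as the sign-in example, might look like this:

```typescript
// Minimal sketch of passkey autofill ("conditional mediation").
// Endpoint names are assumptions; see the sign-in sketch for verification.

function base64urlToBytes(value: string): Uint8Array {
  const b64 = value.replace(/-/g, "+").replace(/_/g, "/");
  return Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
}

// The username field opts in to passkey suggestions in its markup:
//   <input type="text" name="username" autocomplete="username webauthn">
async function enablePasskeyAutofill(): Promise<Credential | null> {
  // Only proceed if the browser supports the conditional (autofill) UI.
  const available = await window.PublicKeyCredential?.isConditionalMediationAvailable?.();
  if (!available) return null; // fall back to a "Sign in with a passkey" button

  const options = await (
    await fetch("/webauthn/authentication/options", { method: "POST" })
  ).json();

  // With mediation: "conditional", this promise resolves only once the user
  // picks one of the passkeys suggested in the autofill dropdown; no modal
  // dialog interrupts users who prefer to type a password instead.
  return navigator.credentials.get({
    mediation: "conditional",
    publicKey: {
      challenge: base64urlToBytes(options.challenge),
      rpId: options.rpId,
    },
  });
}
```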
2.3 Ease of Deployment
For some micro, small, and medium-sized businesses without large, dedicated support staff, end-user deployment of dedicated authentication hardware tokens can create roadblocks. This includes both OTP hardware tokens and FIDO security keys. Historically, the ease of deployment of SMS/app-based OTPs made them a more favorable option. Procurement, logistics, and configuration are a constant battle fought by operations and IT teams. With updates to the FIDO2 ecosystem that expand the authenticator landscape, this problem is alleviated, allowing many different devices to be used as passkey stores.
FIDO authentication with passkeys has been embraced by operating system (OS) and browser providers. This existing OS support from most major vendors means that, in most cases, existing hardware in the enterprise, such as laptops, phones, and tablets, can be leveraged to deploy FIDO with passkeys without costly updates and replacements.
In some cases, enterprises use shared, single-user devices such as iPads. For these use cases, a passkey stored in the integrated platform authenticator may not be appropriate, since any user of the device has access to the credential. In these cases, organizations should use roaming authenticators (hardware security keys) to generate and store passkeys for use on the shared device. This offers the same ease of use and convenience. Keep in mind that there may be an additional cost to purchase and manage these hardware keys for users. When using hardware keys, there may also be a need to issue users a second hardware key as a backup to reduce the risk of the user being locked out of their account(s).
3. Deployment Strategy for Migration from OTP to Passkeys
3.1 Identifying the Deployment Model
Planning for a successful passkey deployment requires organizations to consider the needs of the user and the computing devices they use in their role to maximize the utility of passkeys for staff. At a minimum, organizations should consider the following questions when planning a passkey deployment in order to make passkeys accessible to the broadest audience possible:
- What kind of computing devices are used? Are your users working on laptops or desktop computers? Mobile devices? Tablets?
- Are the devices single-user devices or multi-user (e.g., shared) devices?
- Are the devices provisioned by the company, or are users using their own personal devices at work?
- Are there limitations on using USB ports, Bluetooth, or NFC?
- Are there limitations on access to the internet?
- Are your users commonly wearing gloves or masks which limit the use of on-device biometrics?
Based on the answers to the previous questions, organizations can choose one of a few types of authenticators to store users’ passkeys. The flexibility of passkeys means that organizations can mix and match as their security posture and UX needs dictate. Other documents in this series go into more detail on each type of authenticator.
3.2 Deployment Testing
After determining the deployment model and deploying a FIDO server with applications integrated, it is recommended that organizations use pilot groups to test registration, authentication, and recovery processes (see below) with users. Then use the feedback from the pilot to improve processes and address issues raised by the pilot population before embarking on a broad rollout of passkeys.
3.3 User Experience
3.3.1 Registration
Enterprises should implement a reliable registration process to ensure that users are correctly and securely associated with their passkeys, as stated in earlier FIDO whitepapers. The registration experience is critical to consider because it is a user’s first interaction with passkeys. Here are a few elements to consider when it comes to designing the registration experience:
- Identity proofing – Physically or remotely having the user prove their identity at the start of the registration process is recommended to ensure a strong, abuse-resistant process. This may involve SMS OTP for the final time.
- Self-service registration – Users use their existing credentials to bootstrap a new passkey (a minimal sketch follows this list).
- Supervised registration – Work with IT/helpdesk for registration. This reduces the risk associated with self-service models, which are vulnerable to phishing of the original credentials.
- Pre-provisioned credentials – High effort, high assurance, but a mechanism is needed to get the credential into the user’s hands.
- Remote users – Self-service or pre-provisioned, but a mechanism is needed to provide the PIN to the user to unlock the device the first time.
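As an illustration of the self-service path, the sketch below shows a signed-in user (who may have authenticated with password plus OTP for the final time) creating a passkey with the standard WebAuthn API (navigator.credentials.create). The endpoint paths and the option field names returned by the server are assumptions for illustration only.

```typescript
// Minimal sketch of self-service passkey registration for a signed-in user.
// Endpoint paths and option field names from the server are assumptions.

function base64urlToBytes(value: string): Uint8Array {
  const b64 = value.replace(/-/g, "+").replace(/_/g, "/");
  return Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
}

function bytesToBase64url(buffer: ArrayBuffer): string {
  const bin = String.fromCharCode(...new Uint8Array(buffer));
  return btoa(bin).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

async function registerPasskey(): Promise<void> {
  // 1. Fetch creation options (challenge, RP info, user info) from the server.
  const options = await (
    await fetch("/webauthn/registration/options", { method: "POST" })
  ).json();

  // 2. The authenticator generates a new key pair; the private key never
  //    leaves the authenticator (or the passkey provider's sync fabric).
  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge: base64urlToBytes(options.challenge),
      rp: { id: options.rpId, name: options.rpName },
      user: {
        id: base64urlToBytes(options.userId),
        name: options.username,
        displayName: options.displayName,
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",        // a discoverable credential, i.e. a passkey
        userVerification: "preferred",
      },
    },
  })) as PublicKeyCredential;
  const response = credential.response as AuthenticatorAttestationResponse;

  // 3. Send the attestation to the server, which validates it and stores the
  //    new public key against the user's account.
  await fetch("/webauthn/registration/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: credential.id,
      rawId: bytesToBase64url(credential.rawId),
      clientDataJSON: bytesToBase64url(response.clientDataJSON),
      attestationObject: bytesToBase64url(response.attestationObject),
    }),
  });
}
```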
3.3.2 Sign-In
The first step in designing a FIDO deployment with passkeys is to understand the user base, common work paradigms, and available devices – phones, tablets, laptops, desktops. This step is critical because enabling user-friendly workflows that work with the user’s existing devices is core to developing a successful rollout.
Common user environments and corresponding suggestions include:
- Environments with users who primarily operate on mobile devices or tablets – Look into built-in unlock capabilities.
- Mixed device environments, or environments that rely on a variety of SaaS tools – Leverage SSO provided by the IDP or build flexible login workflows.
- Shared accounts – FIDO RPs can be configured to allow for more than one credential to be associated with a login. Investigate
3.3.3 Recovery
Any authentication process is only as strong as its weakest point, which is why recovery processes have often been abused by attackers to compromise systems. Synced passkeys are durable credentials that can survive device loss and unintentional wiping by restoring from a passkey provider and reduce the need to perform account recovery to establish new credentials for the user. With passkeys, users are expected to lose their credentials less frequently. However, there may be cases where passkeys, or access to the passkeys, is lost, thus requiring account recovery.
For passkey implementations utilizing device-bound passkeys that cannot be backed up or recovered, account recovery should be performed using the backup authentication method, such as using a backup authenticator, to bootstrap enrollment of a new authenticator. In the event that a backup authentication mechanism is not available, organizations must provide alternative pathways to recovery. Organizations should take a risk-based approach to designing and implementing account recovery systems. The specific implementation details will vary widely depending upon organizational requirements. In general, recovery mechanisms should never depend on weaker factors than the credential that the user is trying to recover. In the case where passkeys need to be re-registered, organizations should design mechanisms, either automated or manual, to prevent the use of passkeys no longer registered to that user.
For passkey implementations where synchronized passkeys are used, be sure to document the bootstrapping/enrollment process for new devices, as well as to build a risk-averse process (including identity proofing) for full provider account recovery or replacement. While these catastrophic events should be rare, it may still be necessary to have users go through this process. Knowing the proper process ahead of time will insulate organizations against manipulation and stop-work events.
For additional considerations around account recovery, please see the FIDO Alliance’s Recommended Account Recovery Practices for FIDO Relying Parties. [5]
3.4 Admin Considerations
Monitoring implementation and adoption metrics is critical to ensuring the success of the deployment and ensuring that the security benefits of FIDO authentication with passkeys are realized. Below are recommendations for metrics and processes that are indicative of the success of enterprise passkey migrations.
3.4.1 Monitoring and Visibility into Utilization
Admins are strongly encouraged to use groups or other segmentation structures to allow graceful transition of subsets of users and applications to passkeys. Pilot populations should be carefully constructed and should be composed of a variety of end user types and levels in the organization. Monitoring the usage of items below, both before and after the migration, will provide critical insights into the effectiveness of the program and guide important adjustments.
- Device enrollment: How long did it take the user to enroll their first device?
- Security events: Where was the device at the time of onboarding? What, if any, identity proofing approaches were used to ensure that the correct user was onboarded? If manager, IT support, or peer approval workflows were used, who provided attestation? Are there any time-of-day or device-location anomalies that did not previously exist?
- User authentication: Was the user able to successfully authenticate? Are there any observable changes in their daily authentication patterns that would suggest problems or added friction? Does analysis of day-of-week and time-of-day patterns suggest any issues?
- Key management: Are keys being used as expected and only from known devices? Some authenticators support device attestation, which provides key provenance and assurance of the identity of the authenticator. If the source of the passkey is an important security control for your implementation, be sure to verify whether your chosen authenticator solution supports this kind of attestation.
- How many keys are associated with an individual’s account? Normal guidance would be to expect the number of passkeys associated with a user’s account to be close to the number of devices that the user leverages. For example, if your users use Android phones and Windows laptops, then you should expect to see two to three passkeys associated with a user’s account: one stored on each platform authenticator, and possibly one backup on a security key. In this scenario, if an account had five to six passkeys registered, it would be time to investigate and potentially remove excessive keys. Every organization’s definition of excessive may vary and should be defined based on observations from its environment. Additionally, depending on your deployment, consider the number of applications that you have enabled for passkey authentication. If you deployed passkeys as credentials for an SSO integration, your users may only have one passkey per device. If you deployed passkeys on an application-specific basis, there may be one passkey per device per application. Organizations are recommended to monitor the number of keys associated with each user and use this data as context for informing passkey management (a minimal monitoring sketch follows this list).
- Whose keys are associated with administrative/service/break-glass accounts? In the same way that it is best practice to segregate administrative access from normal user access, generating a separate set of passkeys for administrative accounts is also recommended. If they are shared, be sure to include rotation, monitoring, and good cleanup practices.
- How will passkeys be removed? If an employee leaves the company or moves into a different role, their accounts should be disabled or deleted, or their access should be evaluated and vetted. In situations where this is not reasonable due to legal requirements, passkeys should be promptly removed as part of the disablement process to prevent unauthorized account access. Similarly, if a user reports a device missing or stolen, any passkeys associated with those devices should also be removed.
- Compatibility assurance: Do any combinations of applications and endpoint platforms show unusual changes or a decline in authentication events? Are all invocation methods for passkey authentication continuously functioning, including after upgrades?
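As a simple illustration of the key-count guidance above, the sketch below flags accounts whose number of registered passkeys exceeds an organization-defined expectation and lists passkeys that have gone unused for a long period. The data shapes and thresholds are assumptions; a real deployment would source this data from its IdP or FIDO server and tune the policy to its own environment.

```typescript
// Illustrative sketch: flag accounts whose passkey count exceeds expectations
// and list passkeys that appear stale. Data shapes and thresholds are
// assumptions; source the real data from your IdP or FIDO server.

interface RegisteredPasskey {
  credentialId: string;
  createdAt: Date;
  lastUsedAt?: Date;
  authenticatorLabel?: string; // e.g. "Android phone", "Windows Hello", "security key"
}

interface UserAccount {
  userId: string;
  passkeys: RegisteredPasskey[];
}

// Example policy: a user with a phone, a laptop, and one backup security key
// is expected to have roughly two to three passkeys per application.
const EXPECTED_MAX_PASSKEYS = 3;

function accountsToReview(accounts: UserAccount[]): UserAccount[] {
  return accounts.filter((a) => a.passkeys.length > EXPECTED_MAX_PASSKEYS);
}

function stalePasskeys(account: UserAccount, maxIdleDays = 180): RegisteredPasskey[] {
  // Passkeys unused for a long period are candidates for review and removal,
  // for example after a device was replaced or an employee changed roles.
  const cutoff = Date.now() - maxIdleDays * 24 * 60 * 60 * 1000;
  return account.passkeys.filter(
    (pk) => (pk.lastUsedAt ?? pk.createdAt).getTime() < cutoff
  );
}
```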
Next Steps: Get Started Today
Enterprise organizations should consider migrating to FIDO authentication where possible. Use FIDO standards. Think about what your relying parties are supporting, as well as your own enterprise security requirements. Passkeys are far more secure than traditional OTP mechanisms, and far more secure than passwords. Look for the passkey icon on websites and applications that support passkeys.
For more information about passkeys, check out the FIDO Alliance passkeys resource page [6] and the FIDO Alliance knowledge base [7].
5. Acknowledgments
The authors acknowledge the following people (in alphabetical order) for their valuable feedback and comments:
Dean H. Saxe, Amazon Web Services, Co-Chair FIDO Enterprise Deployment Working Group
John Fontana, Yubico, Co-Chair FIDO Enterprise Deployment Working Group
FIDO Enterprise Deployment Working Group Members:
Dirk Balfanz, Google
Jerome Becquart, Axiad
Vittorio Bertocci, Okta
Greg Brown, Axiad
Tim Cappalli, Microsoft
Matthew Estes, Amazon Web Services
Rew Islam, Dashlane
Jeff Kraemer, Axiad
Karen Larson, Axiad
Sean Miller, RSA
Tom Sheffield, Target Corporation
Johannes Stockmann, Okta
Shane Weeden, IBM
Monty Wiseman, Beyond Identity
Khaled Zaky, Amazon Web Services
6. Glossary of Terms
Please consult the FIDO Technical Glossary for definitions of these terms.
7. References
[1] FIDO Alliance Enterprise Deployment Whitepapers – https://fidoalliance.org/fido-in-the-enterprise/
[2] FIDO Alliance Enterprise Deployment Introduction Whitepaper –
https://media.fidoalliance.org/wp-content/uploads/2023/06/June-26-FIDO-EDWG-Spring-2023_Paper-1_Introduction-FINAL.docx.pdf
[3] Forrester Report of Total Economic Impact of YubiKeys –
https://resources.yubico.com/53ZDUYE6/at/6r45gck4rfvbrspjxwrmcsr/Forrester_Report_Total_Economic_Impact_of_Yubico_YubiKeys.pdf?format=pdf
[4] High Assurance Enterprise FIDO Authentication –
https://media.fidoalliance.org/wp-content/uploads/2023/06/FIDO-EDWG-Spring-2023_Paper-5_High-Assurance-Enterprise-FINAL5.docx-1.pdf
[5] Recommended Account Recovery Practices for FIDO Relying Parties –
https://media.fidoalliance.org/wp-content/uploads/2019/02/FIDO_Account_Recovery_Best_Practices-1.pdf
[6] Passkeys (Passkey Authentication) – https://fidoalliance.org/passkeys/
[7] FIDO Alliance Knowledge Base – https://fidoalliance.org/knowledge-base/
By Eric Drury
If you’re in the digital trust space, you might, like me, be encouraged by the number and variety of inspiring developments, appearing on an almost weekly basis, which illustrate the momentum that responsible technology is gaining in the drive towards a more trustworthy and sustainable future.
One of the latest and more interesting initiatives comes—yet again—from the Kingdom of Bhutan in the form of Gelephu Mindfulness City (GMC). GMC is the Kingdom’s unique version of a smart city. They call it a Special Administrative Region, integrating economic growth with mindfulness, holistic living, and sustainability.
How does digital trust fit into GMC?
In October of last year, Bhutan launched the world’s first self-sovereign national digital identity system, Bhutan NDI. And as a pillar of digital public infrastructure (DPI), a digital identity system forms the basis for trusted interactions between individuals, government, and business.
The NDI system, which Bhutan is currently rolling out nation-wide, will be part of the core infrastructure that provides cutting-edge digital connectivity for GMC. As such, this is an exciting opportunity to further cement the viability of emerging SSI and other trust technologies for implementation at population scale.
Trust Over IP is proud to have contributed to Bhutan’s NDI system both formally and informally.
Drummond Reed, one of ToIP’s founding board members, provided inspiration—and a road map of sorts—for the development of SSI systems via the book he co-authored, ‘Self-Sovereign Identity – Decentralized Digital Identity and Verifiable Credentials’. Drummond and other ToIP members also provided expertise and input for Bhutan’s National Digital Identity Act, the NDI Act of Bhutan, relying heavily on the ToIP governance metamodel developed by ToIP’s Governance Stack Working Group, led by Scott Perry.
For an in-depth case study on the Bhutan NDI program, refer to Bhutan NDI, Digital Trust Ecosystem, written by DHI’s Pallavi Sharma and Eric Drury, co-chair of ToIP’s Ecosystem Foundry Working Group.
Trust Over IP and Gelephu Mindfulness City further align in that both espouse the intertwining of governance and technology as a core enabler of digital trust.
From a governance perspective, GMC blends robust policies that ‘build trust and accountability with mindful incentives designed to empower both residents and businesses alike to reach their fullest potential’. From a technology perspective, Gelephu Mindfulness City will be built on a foundation of ‘world-class infrastructure seamlessly integrating state-of-the-art technology with sustainable practices’.
In support of GMC, the Kingdom is convening the first Bhutan Innovation Forum (BIF), a global initiative uniting international leaders, innovators, and entrepreneurs to support Bhutan’s vision of a Mindfulness City.
The BIF takes place in less than two weeks, October 1-3, 2024. ToIP members Drummond Reed and Eric Drury are honored to be invited to attend, representing the ToIP community and continuing the engagement with Bhutan’s digital trust community.
Drummond and Eric will appear on panels to promote the principles of digital trust and will share their experience and expertise on all things digital trust, including the emergence of cross-border digital trust networks.
ToIP is thrilled to once again walk the path with Bhutan towards a more sustainable, trustworthy, and equitable digital future. We look forward to sharing our learnings after the Forum so that the entire digital trust community benefits.
The post Trust Over IP Members to Participate in Bhutan Innovation Forum appeared first on Trust Over IP.
Geneva, September 16 2024
As we enter the second half of 2024, a time full of global digital governance initiatives, we're excited to share our progress at the Human Colossus Foundation from the governance perspective. This update encapsulates the recent strides of the Swiss E-ID, G20 Digital Public Infrastructure, UN Development Program, and the inaugural event for European Digital Independence.
Distributed Governance Progress Update
In the summer of 2023, the Human Colossus Foundation published Part 1 of the HCF Distributed Governance Model [1]. The model, though abstract in concept, tackles governance through the lens of the Principal-Agent problem. The core idea is to view information systems (i.e. technology) as agents serving users (i.e. humans), whether individuals, businesses, or any sovereign organisation with decision-making power.
In today's hyperconnected digital society, we lose control over information. It has become challenging to know what happens to the data we share and even more difficult to assess the accuracy of the data we use. Part 1 addressed this need for control and accuracy by introducing a governance framework integrating digital technology with existing (non-digital) frameworks.
While Part 1 lays the conceptual foundation, Part 2 turns theory into action. By collaborating directly with the key players in digital transformation across various sectors, we are building the technology stack that will demonstrate the crucial role of data authenticity and integrity in shaping a digital governance model. This practical approach is a prerequisite to our future work on AI governance for example.
Therefore, Part 2 develops a concept into a reality we shape through multiple projects. We are well underway and excited to share the progress here.
Swiss High Chamber adopts the E-ID
Sep. 10 – Bern. The Swiss parliament's upper chamber (Council of States) agreed to the design principles of the Swiss E-ID by a clear 43-to-1 vote [2]. Together with the lower chamber (National Council), they will iron out the remaining differences and pave the way for parliamentary approval in 2024. This agreement further confirms the readiness of the legislative basis for introducing the Swiss E-ID [3].
HCF Contribution: The E-ID project, led by the Federal Department of Justice, has selected HCF's decentralised semantic architecture, Overlays Capture Architecture (OCA) [4], as a core technology for the first E-ID implementation.
As highlighted during the September 5th E-ID Participation Meeting, the Swiss E-ID leverages OCA to present verifiable credentials securely in digital wallets. We are proud to be part of this significant project, and more information is available on the E-ID project's official GitHub repository [5].
Europe – Moves Toward EU Digital Independence
Sep. 24 – Brussels. Kick-off event.
A diverse group of EU Parliament members from different parties, together with leading European and international experts, will gather to engage and discuss the critical building blocks for a secure, accountable digital public infrastructure. The ultimate goal is to firmly establish the objective of European Digital Independence in the next EU Commission agenda. The Human Colossus Foundation has accepted the in-person invitation to participate in this effort.
Horizon Europe Grant No. 101093126
HCF Contribution: Digital independence requires technological independence. The Human Colossus Foundation develops an open-source distributed technology stack to implement applications based on a distributed governance model. Creating these technologies and making them accessible to everyone requires developing an ecosystem of tools to harmonise data across multiple stakeholders (possibly millions) and ensure interoperability across different jurisdictions. The Foundation is creating some of these tools as part of the digital healthcare NextGen EU Horizon project [6], with funding from the EU, Switzerland, and the UK, to integrate sensitive health data (including genomics) into personalised medicine for cardiology. The press release of the European Society of Cardiology provides more information [7].
International – Momentum Behind Digital Public Infrastructure
Oct. 1 to 3 – Cairo. Global Summit on DPI (Digital Public Infrastructure).
The Human Colossus Foundation will be present at this convening of stakeholders in the Digital Public Infrastructure (DPI) ecosystem. At this event, the Foundation will present how the HCF Dynamic Data Economy governance model and technology stack can support DPI implementation strategies for sustainable horizontal scaling. Our approach enables effective data exchange for economic development.
HCF Contribution: In 2023, the UN Development Programme (UNDP) published its Model Governance Framework for Digital Legal Identity Systems [8]. Within that framework, UNDP develops the model governance assessment for data exchanges that respects countries' (i.e. sovereign entities') existing governance and fundamental Human Rights principles. The Human Colossus Foundation is part of a UNDP Advisory Board that provides its expertise in developing this work.
Current Work at the Foundation
The above initiatives provide input for the continuous development of the core HCF technologies. They help to demonstrate the essential relevance of accurate data for digital governance. From a governance perspective, we can summarise them as follows:
The 'Ambient Infrastructure' is a reputation-based authentication system built upon the decentralised key management infrastructure KERI (Key Event Receipt Infrastructure). Launched in 2023 as part of the EU Horizon 2020 eSSIF-Lab project [9], the Ambient Infrastructure is now advancing in NextGen, an EU Horizon project focused on personalised cardiovascular medicine.
Version 2.0 of the OCA specification [4], supporting the OCA ecosystem. Decentralised semantics architectures are significant developments for digital governance, ensuring respect for different sovereign digital governance frameworks.
OCA Ecosystem v1.0: community-based solutions and tooling, including extensions (i.e. overlays not part of the core specification). The Human Colossus Research and Technology Councils have opened a dedicated focus group to collect community requirements.
This ecosystem, featuring a suite of tools and protocols, ensures consistent and interoperable data flows across multiple stakeholders and jurisdictions.
Conclusion
The Human Colossus Foundation has a busy autumn ahead. If you would like to help map a distributed governance framework into real-world applications, the HCF Research Council invites you to join its new Focus Group.
Joining this group will provide you with the opportunity to work closely with our team, share your expertise, and help shape the future of digital governance. We are also expanding our network of subject matter experts. Please get in touch with rc@humancolossus.org to learn more and express your interest.
References
[1] “Distributed Governance: a Principal-Agent Approach to Data Governance – Part 1 Background & Core Definitions”, P. Page, P. Knowles, R. Mitwicki, arXiv:2308.07280v2 [cs.CY], August 15, 2023
[2] “Parlament ist sich über Ausgestaltung der E-ID im Grundsatz einig” (“Parliament agrees in principle on the design of the E-ID”), Keystone-SDA-ATS AG, September 10, 2024
[3] “Parliament gets closer to finalising new digital ID scheme”, Swiss Info.ch, September 10 2024
[4] “Overlays Capture Architecture: Official Resources”, ColoSSI Network website, August 2023
[5] “Specification de Design pour les preuves électroniques” (“Design specification for electronic credentials”), Swiss E-ID Participation Meeting, September 9, 2024
[6] “Next Generation Tools for Genome-Centric Multimodal Data Integration in Personalised Cardiovascular Medicine”, EU Horizon Grant Number 101136962, funded by the EU, the Swiss State Secretariat for Education, and UK Research & Innovation.
[7] “Heart patients set to receive treatment tailored to their genetic and health information”, European Society of Cardiology Press Release, February 12 2024
[8] “UNDP Model Governance Framework for Digital Legal Identity Systems”, United Nations Development Programme & Norwegian Ministry of Foreign Affairs. Link as of September 16, 2024
[9] “Decentralized Key Management Infrastructure for SSI by The Human Colossus Foundation“, NGI eSSIF-Lab project July 2021
The Identity at the Center podcast was on the scene in Washington DC attending the Identity Week America conference. We had the opportunity to sit down with Ryan Galluzzo from NIST to talk about the process of updating NIST standards, assurance levels, and existential questions from Ryan’s son.
You can watch the episode here https://www.youtube.com/watch?v=NtWRrmoltQQ or listen to it in your favorite podcast app.
This is the third in a series of posts about Systems Thinking, an approach that helps us make sense of complex situations by considering the whole system rather than just the individual pieces from which it is constituted.
This series is made up of:
Part 1: Three Key Principles
Part 2: Understanding Feedback Loops
Part 3: Identifying Leverage Points (this post)
In the first post, we laid the groundwork by exploring the foundational principles of Systems Thinking. We then explored feedback loops in the second post, examining how they drive system behaviour and contribute to stability or change.
In this concluding post, we turn our attention to Leverage Points — those critical areas within a system where a small shift can lead to profound changes. By understanding and identifying these points, you can uncover powerful opportunities for intervention, allowing you to drive meaningful and lasting change within complex systems.
1. What are leverage points?
“When we must deal with problems, we instinctively refuse to try the way that leads through darkness and obscurity. We wish to hear only of unequivocal results, and completely forget that these results can only be brought about when we have ventured into and emerged again from the darkness.” — Carl Jung
Leverage points are specific areas within a system where a small change can produce a significant impact on the entire system. These points often require careful analysis and a deep understanding of the system’s structure and behaviour, as they are not always immediately obvious. Identifying leverage points can be challenging because they are often hidden beneath the surface of the system’s visible components.
For example, in the world of technology, offering short, simple, clear (e.g. no legalese) ‘Terms and Conditions’ might be a leverage point. By helping users understand exactly how your organisation uses their data, you might improve customer satisfaction, increase trust, and reduce misunderstandings — all by making it easy for people to understand your company’s policy on privacy.
Leverage points, therefore, offer a way to create meaningful change by focusing on areas where even minimal effort can lead to substantial results. Recognising these points requires stepping back and viewing the system holistically, understanding not just the individual parts but how they interact and influence each other.
2. The role of paradigms in leverage points
“Inquiry is the controlled or directed transformation of an indeterminate situation into one that is so determinate in its constituent distinctions and relations as to convert the elements of the original situation into a unified whole.” — John Dewey
Paradigms are the underlying beliefs and assumptions that shape how we understand and interact with a system. These paradigms represent some of the most powerful leverage points, as changing a paradigm can fundamentally alter the way a system operates. Shifting these beliefs requires questioning what is often taken for granted and being open to new perspectives.
To go back to our Terms and Conditions example, a paradigm shift from prioritising user convenience to emphasising data privacy can serve as a transformative leverage point. Traditionally, many tech companies have focused on creating products that are easy to use, often at the expense of data privacy. However, as the paradigm shifts towards valuing privacy, we see significant changes in how technologies are designed and deployed.
This shift has been significantly driven by policy changes, such as the implementation of the European General Data Protection Regulation (GDPR). The GDPR has forced companies to rethink how they collect, store, and manage user data, prioritising privacy by design. As a result, this new perspective has encouraged the development of more secure products, the adoption of stricter data management practices, and an overall increase in consumer trust. The influence of GDPR illustrates how policy can drive paradigm shifts, leading to widespread changes not just in product development but also in corporate strategies, legal frameworks, and consumer expectations. This example demonstrates the profound impact that changing a paradigm, supported by regulatory measures, can have on an entire system.
By addressing paradigms, especially through supportive policies, you are working at the root of the system’s behaviour. Changing the underlying beliefs that drive a system can lead to widespread and lasting change, making paradigm shifts one of the most effective leverage points in Systems Thinking.
3. Small, incremental changes as leverage points
“Of any stopping place in life, it is good to ask whether it will be a good place from which to go on as well as a good place to remain.” — Mary Catherine Bateson
Small, well-placed actions can serve as powerful leverage points within a system. These might involve slight adjustments in policy, processes, or resource allocation. The key is to identify where these small changes can have a disproportionately large impact, leading to significant shifts in the system without the need for extensive interventions.
For example, in the context of data privacy, implementing a small but strategic change, such as requiring explicit user consent for data sharing, can have a profound impact. A great example of this is the introduction of GDPR-compliant consent forms across digital platforms in the EU. This seemingly minor policy change has led to a significant increase in user awareness and control over their personal data, contributing to greater trust in online services. The increased transparency and control have demonstrated how a small, well-placed policy change can lead to substantial benefits for both users and organisations, highlighting the power of incremental changes at the right leverage points.
The success of such small interventions lies in their ability to trigger widespread behavioural changes without the need for drastic overhauls. By carefully identifying and implementing these changes, we can achieve meaningful and lasting impacts across various systems.
4. Communication as a leverage point
“We are changed not only by being talked to but also by hearing ourselves talk to others, which is one way of talking to ourselves. More exactly, we are changed by making explicit what we suppose to have been awaiting expression a moment before.” — Geoffrey Vickers
Communication within a system can serve as a crucial leverage point. The way information is shared and understood can significantly influence the system’s behaviour, impacting everything from decision-making processes to overall system efficiency. Improving communication channels, ensuring transparency, and making sure all stakeholders are heard can lead to more effective decision-making and positive outcomes.
For instance, in our work at We Are Open Co-op, practicing open communication and transparency is central to our project management approach. By sharing progress, challenges, and decision-making processes openly with all stakeholders, we foster a culture of trust and collaboration. This openness extends to how we handle privacy; by being transparent about how we collect, use, and protect data, we build confidence among our partners and clients. Implementing tools that enable open documentation and communication, such as public repositories or shared workspaces, not only keeps everyone informed but also reinforces accountability and integrity in our practices. This small adjustment in how information is shared and protected can lead to significant improvements in project outcomes, ensuring that tasks are completed effectively while respecting privacy concerns.
The power of communication as a leverage point lies in its ability to unify and align the various components of a system. By enhancing the flow of information and ensuring that all voices are heard, you can create a more cohesive, responsive, and effective system.
5. Feedback Loops as Leverage Points
“The major problems in the world are the result of the difference between how nature works and the way people think.” — Gregory Bateson
Feedback loops, as discussed in the second post, can also act as leverage points within a system. By altering how these loops operate, you can influence the system’s overall behaviour. Identifying and adjusting these loops — whether reinforcing (positive) or balancing (negative) — can be a powerful way to steer the system toward desired outcomes.
For example, in the context of data privacy, feedback loops between user behaviour and data management practices can be effectively leveraged to enhance privacy protection. By providing users with real-time feedback on how their data is being used and giving them the ability to adjust their privacy settings, you create a positive feedback loop. As users become more aware of how their data is handled, they are often motivated to take greater control over their privacy settings. This increased control leads to further trust in the platform, encouraging users to engage more actively with privacy tools. Over time, this feedback loop can significantly strengthen overall data protection practices, leading to better compliance with regulations and higher user satisfaction.
This example highlights how adjusting feedback loops can lead to substantial changes in system behaviour with relatively simple interventions. By strategically modifying these loops, you can achieve desired outcomes more effectively and sustainably.
Conclusion
Leverage points offer powerful opportunities to influence complex systems with relatively small, strategic actions. By identifying where these points lie and understanding how to use them effectively, you can create meaningful and lasting change within any system. Whether you are looking to improve organisational processes, drive social change, or enhance personal decision-making, the principles of Systems Thinking can help you navigate complexity with greater clarity and purpose.
This post, along with the previous two in this series, provides an introduction to the foundational principles of Systems Thinking and their practical applications. These insights are drawn from my studies toward an MSc in Systems Thinking in Practice through the Open University, and they are just the beginning of what Systems Thinking can offer.
If you’re interested in applying these principles to your work, We Are Open Co-op is here to help you implement effective systemic interventions tailored to your unique challenges. Thank you for following this series — your journey into Systems Thinking doesn’t have to end here. Continue to explore, apply, and refine these concepts, and watch how they can transform the way you approach complex problems.
References
Ackoff, R.L. (1974). Redesigning the Future: A Systems Approach to Societal Problems. New York: Wiley.
Bateson, G. (1972). Steps to an Ecology of Mind. San Francisco: Chandler Publishing Company.
Bateson, M.C. (1994). Peripheral Visions: Learning Along the Way. New York: HarperCollins.
Beer, S. (1972). Brain of the Firm. New York: Herder and Herder.
Checkland, P. (1981). Systems Thinking, Systems Practice. Chichester: John Wiley & Sons.
Dewey, J. (1938). Logic: The Theory of Inquiry. New York: Holt, Rinehart, and Winston.
Jung, C.G. (1957). The Undiscovered Self. New York: Little, Brown, and Co.
Meadows, D.H. (2008). Thinking in Systems: A Primer. White River Junction: Chelsea Green Publishing.
Vickers, G. (1965). The Art of Judgement: A Study of Policy Making. London: Chapman & Hall.
An Introduction to Systems Thinking was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.
Public review ends October 14th
OASIS and the KMIP TC are pleased to announce that KMIP Version 3.0 and KMIP Profiles Version 3.0 are now available for public review and comment.
The OASIS KMIP TC works to define a single, comprehensive protocol for communication between encryption systems and a broad range of new and legacy enterprise applications, including email, databases, and storage devices. By removing redundant, incompatible key management processes, KMIP will provide better data security while at the same time reducing expenditures on multiple products.
The documents and all related files are available here:
Key Management Interoperability Protocol (KMIP) Version 3.0
Committee Specification Draft 01
23 August 2024
Editable Source: https://docs.oasis-open.org/kmip/kmip-spec/v3.0/csd01/kmip-spec-v3.0-csd01.docx (Authoritative)
HTML: https://docs.oasis-open.org/kmip/kmip-spec/v3.0/csd01/kmip-spec-v3.0-csd01.html
PDF: https://docs.oasis-open.org/kmip/kmip-spec/v3.0/csd01/kmip-spec-v3.0-csd01.pdf
For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at: https://docs.oasis-open.org/kmip/kmip-spec/v3.0/csd01/kmip-spec-v3.0-csd01.zip
Key Management Interoperability Protocol (KMIP) Profiles Version 3.0
Committee Specification Draft 01
30 November 2024
Editable Source: https://docs.oasis-open.org/kmip/kmip-profiles/v3.0/csd01/kmip-profiles-v3.0-csd01.docx (Authoritative)
HTML: https://docs.oasis-open.org/kmip/kmip-profiles/v3.0/csd01/kmip-profiles-v3.0-csd01.html
PDF: https://docs.oasis-open.org/kmip/kmip-profiles/v3.0/csd01/kmip-profiles-v3.0-csd01.pdf
Test Cases: https://docs.oasis-open.org/kmip/kmip-profiles/v3.0/csd01/test-cases/
For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at: https://docs.oasis-open.org/kmip/kmip-profiles/v3.0/csd01/kmip-profiles-v3.0-csd01.zip
How to Provide Feedback
OASIS and the KMIP TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.
The public review starts September 13, 2024 at 00:00 UTC and ends October 14, 2024 at 23:59 UTC.
Comments from TC members should be sent directly to the TC’s mailing list. Comments may be submitted to the project by any other person through the use of the project’s Comment Facility: https://groups.oasis-open.org/communities/community-home?CommunityKey=2b5e5c66-cc41-4aa5-92ee-018f5aa7dfc4
Comments submitted for this work by non-members are publicly archived and can be viewed by using the link above and clicking the “Discussions” tab.
Please note, you must log in or create a free account to see the material. Please contact the TC Administrator (tc-admin@oasis-open.org) if you have any questions regarding how to submit a comment.
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.
OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.
Additional information about the specification and the KMIP TC can be found at the TC’s public home page: https://www.oasis-open.org/committees/kmip/
Additional references:
[1] https://www.oasis-open.org/policies-guidelines/ipr/
[2] http://www.oasis-open.org/committees/kmip/ipr.php
Intellectual Property Rights (IPR) Policy
The post Invitation to comment on two KMIP specifications appeared first on OASIS Open.
1. What is the mission and vision of IndyKite?
Backed by leading venture firms and based in San Francisco, IndyKite is building a new category of data management and digital identity services by capturing, connecting and controlling data across the enterprise and surrounding ecosystem. With an identity-centric approach to data, IndyKite enables companies to achieve higher trust in their data products, AI and applications with enhanced visibility, data governance and granular access controls. Leveraging knowledge graph technology and machine learning, IndyKite delivers a powerful operational data layer to enable developers with flexible APIs through a growing open-source ecosystem. Learn more at www.indykite.com.
2. Why is trustworthy digital identity critical for existing and emerging markets?
Digital identity is a core enabler of applications and services and will only become more important in the future. Digital identity not only applies to humans, but also to all devices, applications, systems, AI, digital products and even individual data points. Securing these identities is paramount, but even more important is understanding how they drive and enable user experience, functionality and data mobility across the organization. At IndyKite, we see digital identity as the starting point for enabling businesses to build modern solutions, deliver incredible customer experiences and ensure trustworthy AI.
3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?
Modern organizations around the world are undermined by siloed data and disconnected identities. The advent of AI tools is increasing pressure for leaders to address data challenges in the organization to ensure future viability. IndyKite enables organizations to capture, connect and control their data in a flexible and dynamic way, to drive better decisions, security, machine learning and AI and solve challenges.
4. What role does Canada have to play as a leader in this space?
Canada holds a place of global influence as a leading voice in many sectors. By pioneering a secure and accessible digital identity framework, Canada has ensured the sustainability of its modern economy into the future and created a blueprint for other nations to follow.
5. Why did your organization join the DIACC?
As digital identity is an essential part of the future, it needs modern approaches and frameworks that enable innovation without being restricted by legacy thinking. DIACC is an ideal forum for public and private sector leaders to discuss, design and accelerate these approaches to ensure digital trust into the future.
6. What else should we know about your organization?
Powered by graph technology, the IndyKite platform increases visibility, trust and control of your data. This enables data pipelines to get the right data, to the right place and in the right context to drive enhanced product development and new revenue channels. It also enables the secure sharing of data beyond the bounds of your organization, and better customer journeys with native identity workflows. More details can be found at www.indykite.com
Imagine a world where combating climate change and protecting the environment is integrated into every step of global trade.
In this episode, hosts Reid Jackson and Liz Sertl are joined by Lea-Ann Bigelow, Director of Green Trade at U.S. Customs and Border Protection (CBP). With a wealth of experience in environmental regulation and sustainability, Lea-Ann shares how U.S. Customs is evolving to meet the challenges of climate change through innovative trade practices.
Lea-Ann discusses how CBP's efforts are not just about regulating imports but about leading the charge in reducing emissions, enhancing traceability, and fighting environmental crimes. By integrating sustainability into the global supply chain, these initiatives are paving the way for a cleaner, safer world.
In this episode, you’ll learn:
How U.S. Customs and Border Protection is pioneering green trade practices to reduce emissions and enhance sustainability across global supply chains.
The implementation of advanced traceability systems to combat environmental crime and ensure compliance in international trade.
The collaborative strategies between government and industry that are shaping a more resilient and environmentally responsible future for global trade.
Jump into the Conversation:
[00:00] Introducing Next Level Supply Chain
[00:40] Lea-Ann Bigelow Discusses Green Trade Initiatives at CBP
[01:23] The Dark Side of Trade: Environmental Crime and Its Ties to Other Offenses
[01:52] Why GS1 Connect Is Crucial for Environmental Compliance
[02:55] The Role of GS1 Standards in Enhancing Global Trade Compliance
[04:41] Developing the Green Standard for Global Trade
[04:50] International Collaboration on Environmental Regulations
[05:31] Navigating Complex Global Environmental Regulations
[06:02] Closing Thoughts: The Future of Green Trade
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guests:
Lea-Ann Bigelow on LinkedIn
Public Review Ends October 11th
OASIS and the LEXIDMA TC are pleased to announce that DMLex V1.0 CSD04 is now available for public review and comment.
DMLex is a data model for modelling dictionaries (here called lexicographic resources) in computer applications such as dictionary writing systems. DMLex is a data model, not an encoding format. DMLex is abstract, independent of any markup language or formalism. At the same time, DMLex has been designed to be easily and straightforwardly implementable in XML, JSON, NVH, as a relational database, and as a Semantic Web triplestore.
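To give a feel for how straightforwardly such a model can be rendered in JSON-friendly code, here is an illustrative, heavily simplified sketch of a lexicographic resource in TypeScript. The type and field names are approximations chosen for illustration and are not the normative DMLex object model; consult the specification itself for the authoritative structure.

```typescript
// Illustrative, heavily simplified sketch of a DMLex-style lexicographic
// resource as TypeScript/JSON. Field names are approximations, not the
// normative DMLex object model; consult the specification for the real thing.

interface Example {
  text: string;
  sourceIdentity?: string;
}

interface Sense {
  definition?: string;
  examples?: Example[];
}

interface Entry {
  headword: string;
  partOfSpeech?: string[];
  senses: Sense[];
}

interface LexicographicResource {
  title: string;
  langCode: string; // language of the headwords, e.g. "en"
  entries: Entry[];
}

// A tiny resource with one two-sense entry, serializable directly to JSON.
const toyDictionary: LexicographicResource = {
  title: "Toy English Dictionary",
  langCode: "en",
  entries: [
    {
      headword: "bank",
      partOfSpeech: ["noun"],
      senses: [
        {
          definition: "a financial institution",
          examples: [{ text: "She works at a bank." }],
        },
        { definition: "the land alongside a river" },
      ],
    },
  ],
};

console.log(JSON.stringify(toyDictionary, null, 2));
```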
The documents and all related files are available here:
LEXIDMA Data Model for Lexicography (DMLex) Version 1.0
Committee Specification Draft 04
06 September 2024
Editable Source (Authoritative):
https://docs.oasis-open.org/lexidma/dmlex/v1.0/csd04/dmlex-v1.0-csd04.pdf
HTML:
https://docs.oasis-open.org/lexidma/dmlex/v1.0/csd04/dmlex-v1.0-csd04.html
Schemas:
XML: https://docs.oasis-open.org/lexidma/dmlex/v1.0/csd04/schemas/XML/
JSON: https://docs.oasis-open.org/lexidma/dmlex/v1.0/csd04/schemas/JSON/
RDF: https://docs.oasis-open.org/lexidma/dmlex/v1.0/csd04/schemas/RDF/
Informative copies of third party schemas are provided:
https://docs.oasis-open.org/lexidma/dmlex/v1.0/csd04/schemas/informativeCopiesOf3rdPartySchemas/
For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at: https://docs.oasis-open.org/lexidma/dmlex/csd04/dmlex-v1.0-csd04.zip
How to Provide Feedback
OASIS and the LEXIDMA TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.
The public review starts September 10, 2024 at 00:00 UTC and ends October 11, 2024 at 23:59 UTC.
Comments may be submitted to the project by any person through the use of the project’s Comment Facility: https://groups.oasis-open.org/communities/community-home?CommunityKey=b7061122-77c2-424a-8859-018dce26037f
Comments submitted for this work by non-members are publicly archived and can be viewed by using the link above and clicking the “Discussions” tab.
Please note, you must log in or create a free account to see the material. Please contact the TC Administrator (tc-admin@oasis-open.org) if you have any questions regarding how to submit a comment.
All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.
OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.
Additional information about the specification and the LEXIDMA TC can be found at the TC’s public home page: https://www.oasis-open.org/committees/lexidma/
Additional references:
[1] https://www.oasis-open.org/policies-guidelines/ipr/
[2] http://www.oasis-open.org/committees/lexidma/ipr.php
Intellectual Property Rights (IPR) Policy
The post Invitation to comment on LEXIDMA DMLex Version 1.0 – CSD04 appeared first on OASIS Open.
By Atsuhiro Tsuchiya, APAC Market Development Sr. Manager
In June, Google and the FIDO Alliance hosted a highly successful event in Tokyo that brought together innovative minds from various universities and companies. The event was marked by a high level of participation and competition, showcasing the latest advancements in authentication technologies.
Event Highlights:
High Participation: The event saw an impressive turnout, with around 200 participants from 25 different universities and companies.
Cutting-Edge Innovations: Participants showcased groundbreaking solutions aimed at enhancing security and convenience in authentication processes.
Technical Workshops: Engaging workshops provided a platform for sharing practical knowledge and experiences.
Key Features:
Real-World Implementations: A notable aspect of this event was the participation of teams focused on implementing their solutions in real-world services. This added a layer of practicality and relevance, making the event highly impactful.
High-Level Competition: The level of competition was exceptionally high, reflecting the advanced state of current research and development in the field. In particular, the university teams all demonstrated a high level of implementation.
Awards and Recognition:
Grand Winner: Keio University SFC-RG pkLock team (Keio University)
The team developed an innovative authentication system for IoT space that combines security and user convenience, making it a standout solution in the competition.
FIDO Award 1: SKKN (Waseda University)
This team was recognized for their advanced authentication technology that promises to enhance security in various applications.
FIDO Award 2: TOKYU ID (Tokyu)
Their solution focused on integrating authentication technologies into everyday services, demonstrating practical and scalable applications.
Google Award: Team Nulab (Nulab)
The team impressed with their user-friendly authentication app that combines multiple security features to provide a seamless user experience.
What We Learned from the Event:
Collaboration is Key: The event underscored the importance of collaboration between academia and industry. By working together, we can accelerate the development and implementation of innovative authentication solutions.
Focus on User Experience: Many of the successful solutions emphasized the need for a seamless and user-friendly experience. Security should not come at the expense of convenience.
Scalability and Practicality: Solutions that can be easily integrated into existing systems and scaled to meet the needs of various applications are crucial for widespread adoption.
Continuous Innovation: The rapid advancements in authentication technologies highlight the need for continuous innovation and adaptation to stay ahead of emerging threats.
Acknowledgements: We would like to extend our heartfelt thanks to the tutors from the Japan Working Group for their invaluable support and guidance throughout the event. Their expertise and dedication were instrumental in making this event a success.
These award-winning solutions highlight the diverse approaches and innovative thinking that are driving the future of authentication technologies. Each team demonstrated a unique blend of creativity, technical expertise, and practical application, making this event a true showcase of excellence in the field.
The FIDO Alliance is excited to share the outcomes of this event and looks forward to continuing to support and foster innovation in authentication technologies. To learn more about the background and details, please read the full event report, Passkeys Hackathon Tokyo event report.
It’s time for another episode of the Identity at the Center podcast! This week, Jim McDonald sat down with Deneen DeFiore, Chief Information Security Officer at United Airlines. They discuss her career journey in identity and access management, with a focus on her past experiences at General Electric and her current initiatives at United Airlines. The conversation emphasizes the importance of integrating customer identity with enhanced trust and personalized experiences, the significance of team building and professional growth, and balancing business and technical expertise in leadership.
Watch the episode here: https://www.youtube.com/watch?v=KVmYRDv7jHM
The post The SLAP – Requirements for Practical Verifiable Credentials appeared first on Velocity.
We’ve got another episode of The Identity at the Center podcast for you this week! This one is sponsored by Zilla Security. We spoke with Nitin Sonawane of Zilla Security about disrupting the identity security and governance space with innovative solutions such as Zilla Universal Sync (ZUS) and how AI and ML can streamline and enhance access reviews and compliance.
Watch the episode here: https://www.youtube.com/watch?v=QLiSUgYyZwU
You can learn more about Zilla Security at https://zillasecurity.com/
We had a great conversation with Andrew Shikiar from the FIDO Alliance on the latest episode of the Identity at the Center podcast. We dove into the world of authentication, covering everything from different use cases to the importance of passkeys and regional adoption trends. We also got the inside scoop on Authenticate 2024, a can't-miss event for anyone in the identity space. Andrew also announced a new discount code just for IDAC fans to the FIDO Alliance Shop to get yourself some swag!
Check out the episode and let us know what you think:
https://youtu.be/quY-pEDa_5Y?si=RN9wfYnJBD1kc98e
Don’t forget about our discounts!
Authenticate Conference - Use code **IDAC15** for 15% off: https://authenticatecon.com/event/authenticate-2024-conference/
FIDO Alliance Shop - https://shop.fidoalliance.org/ - Use code **IDAC10** for a discount on your purchase!
Most of us today are accustomed to unlocking our smartphones with a simple glance or touch. In the blink of the tech industry’s eye, biometric authentication has quickly become a normal part of our daily lives.
Consumers love the convenience and security of biometrics, which has helped propel its growth and mainstream adoption. In the FIDO Alliance’s last global barometer survey, biometrics ranked top as the most secure and the preferred way to log in by consumers.
But for biometrics to continue its success, there is a reputation issue and ‘elephant in the room’ that is holding back consumers, governments, and other implementers alike from full trust and confidence: bias.
Are biometric technologies biased?
Concerns have been circulating for some time about the accuracy of biometric systems in processing diverse demographics. In the UK in 2021, for example, Uber drivers from diverse ethnic backgrounds took legal action over claims that the company's software had illegally terminated their contracts because it was unable to recognize them.
In the FIDO Alliance’s recent study, Remote ID Verification – Bringing Confidence to Biometric Systems Consumer Insights 2024, consumers made clear that they are concerned about bias in biometric facial verification systems.
While over half of respondents indicated they believe face biometrics can accurately identify individuals (56%), others in the survey report a different experience.
A quarter of respondents felt they had been discriminated against by biometric face verification systems (25%).
Organizations like NIST have been closely monitoring the disparities in bias performance for some time – with NIST’s most recent evaluation of solutions across different demographics released this year. The headline is: Not all biometric systems are created equal.
As face verification has become adopted globally, the accuracy in identifying diverse demographics has gone from weakness to strength, with most leading solutions today operating with extremely small margins of error. However, less sophisticated solutions do exist and are perpetuating a far bigger reputational and adoption challenge.
Inclusivity and accessibility in remote identity
Inclusivity is just one part of the problem. Bias impacts the entire user experience and erodes faith in the technology overall. Half of American and British consumers in the survey said they would lose trust in a brand or institution if it were found to have a biased biometric system, and 22% would stop using the service entirely.
Remote identity solutions unlock huge benefits for governments, organizations, and consumers alike. Consider how many more scenarios there are today where we are asked to prove who we are virtually – starting a new job, opening a bank account, signing legal documents. And, as outlined earlier, we know consumers already love using biometrics – 48% of those we surveyed preferred biometrics to enroll and verify themselves remotely.
However, the excitement around more remote identity solutions is understandably mixed with these bias concerns, causing some organizations to delay or reconsider implementation. We're in an age where digital inclusivity is highly scrutinized, especially for public services, and governments are increasingly calling for a way to demonstrate equity.
Equitable biometrics systems are both a practical and a moral imperative. So how do we get there?
Addressing bias in biometric systems
The FIDO Alliance has launched its Face Verification Certification program, with mitigating bias as a key priority. It assesses a face verification system's performance across different demographics, including skin tone, age, and gender, in addition to far more wide-reaching security and performance tests.
Why is independent certification for biometrics important?
Currently, testing is conducted on a case-by-case basis, per organization. This means it is expensive and time-consuming, and what 'good' looks like varies widely. The FIDO Alliance's program is based on proven ISO standards and has been developed by a diverse, international panel of industry, government, and subject matter experts. This means it is unrivaled in its ability to set equitable performance benchmarks.
More broadly, certification and independent global testing catalyze innovation and technological adoption. Whether launching an identity verification solution or including it in related regulations, open standards and certification set a clear performance benchmark. It removes considerable duplicated efforts, improves the confidence of all stakeholders, and ultimately drives up the performance of all solutions on the market.
How is bias evaluated?
At this time, the FIDO Alliance program evaluates bias using false reject rate (FRR) methodology, measured at the transaction level across skin tone, age, and gender. ISO 19795-10 offers multiple options for measuring differential performance. One option, described in Section 7.4.2 ("Reporting differential performance against a benchmark"), has testers compare the performance of one or more demographic groups to a specific benchmark. FIDO has chosen this approach given the small sample size of the individual groups (50+ per group). For skin tone, groups are defined and distributed across three brackets based on the Monk Scale. For gender, groups are defined and distributed across male, female, and other. For age, groups are defined and evenly distributed across four age brackets.
The benchmarks are set at 6% (95% confidence interval), based on bootstrapping simulations. These simulations covered a spectrum of scenarios, population sizes, and correlations between attempts. The benchmark chosen reduces the probability that a group will be considered different when it actually is not, i.e., finding a difference by chance (<5%).
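To make the benchmark approach above concrete, here is a minimal TypeScript sketch of how a per-group false reject rate could be bootstrapped and compared against a fixed benchmark. It illustrates the general statistical idea only, not the FIDO program's actual test procedure; the iteration count, sample data, and the 6% threshold are placeholders, and a real evaluation would also account for correlation between attempts by the same subject.

```ts
// Sketch: percentile-bootstrap estimate of a group's FRR upper bound,
// compared against a benchmark. Illustrative only.

function bootstrapUpperFrr(
  rejects: boolean[],          // one entry per transaction: true = false reject
  iterations = 10_000,
  quantile = 0.95,
): number {
  const n = rejects.length;
  const frrs: number[] = [];
  for (let i = 0; i < iterations; i++) {
    let rejected = 0;
    for (let j = 0; j < n; j++) {
      // resample transactions with replacement
      if (rejects[Math.floor(Math.random() * n)]) rejected++;
    }
    frrs.push(rejected / n);
  }
  frrs.sort((a, b) => a - b);
  return frrs[Math.floor(quantile * (iterations - 1))];
}

const BENCHMARK_FRR = 0.06; // placeholder matching the 6% figure above

function groupMeetsBenchmark(rejects: boolean[]): boolean {
  // The group passes if the bootstrapped 95th-percentile FRR stays at or below the benchmark.
  return bootstrapUpperFrr(rejects) <= BENCHMARK_FRR;
}
```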
What is the value of certification for Biometric Vendors?
- Independent validation of biometric performance
- Opportunity to understand gaps in product performance to then improve and align with market demands
- Demonstrate product performance to potential customers
- Improve market adoption by holding an industry-trusted certification
- Leverage one certification for many customers/relying parties
- Benefit from FIDO delta and derivative certifications for minor updates and extendability to vendor customers
- Reduce the need to repeatedly participate in vendor bake-offs
What is the value of certification for Relying Parties?
- One-of-a-kind, independent, third-party validation of biometric performance assessing accuracy, fairness, and robustness against spoofing attacks
- Provides a consistent, independent comparison of vendor products – eliminating the burden of maintaining one's own program for evaluating biometric products
- Accelerates FIDO adoption to passwordless
- Commitment to ensure quality products for customers of the relying parties
- Requirements developed by a diverse, international group of stakeholders from industry, government, and subject matter experts
- Conforms to the FIDO Annex published in ISO standards
What is the value of certification with FIDO accredited laboratories?
FIDO Accredited Laboratories are available worldwide and follow a common set of requirements and rigorous evaluation processes, defined by the FIDO Alliance Biometrics Working Group (BWG), and follow all relevant ISO standards. These laboratories are audited and trained by the FIDO Biometric Secretariat to ensure lab testing methodologies are compliant and utilize governance mechanisms per FIDO requirements. Laboratories perform biometric evaluations in alignment with audited FIDO accreditation processes. In contrast, bespoke, single-laboratory biometric evaluations may not garner sufficient trust from relying parties for authentication and remote identity verification use cases.
What are the other ISO standards that FIDO certification conforms to?
In addition to ISO/IEC 19795-10, vendors and their accredited labs adhere to the following ISO standards:
The FIDO Alliance continues to champion the cause of combating bias and enhancing security measures in remote biometric identity verification technologies through its Identity Verification and Biometric Component certifications. The FIDO Certification Programs offer reliability, security, and standardization to certify biometric solutions for remote identity verification, and have specifically set benchmarks for face verification technologies to test for bias.
In addition to the Face Verification program, the FIDO Alliance emphasizes the importance of rigorous testing and certification processes in ensuring that identity verification solutions are trustworthy and secure, including the Document Authenticity (DocAuth) Certification. These programs offer solution providers the opportunity to differentiate themselves in the market by leveraging FIDO’s independent, accredited test laboratories and industry-recognized brand.
Learn More about FIDO Biometric Certifications
As digital identity verification landscapes evolve, the demand for independently verified and unbiased biometric systems becomes increasingly vital. The introduction of the FIDO Alliance's Face Verification Certification Program reinforces the commitment of solution providers to proactively address trust, security, and inclusivity in biometric identity verification technologies.
To learn more, download the in-depth consumer research on remote ID verification here, and discover the certified providers backed by FIDO certification to stay ahead with secure and trustworthy biometric identity verification technologies.
By LunarCrush and OriginTrail
In the rapidly evolving world of tech and finance, the demand for innovation and adaptability is higher than ever, driven by a quest for transparency for internet users. LunarCrush has been at the forefront of Social Intelligence, converting human-driven insights into actionable information for both retail and institutional stakeholders. Originally focusing on the crypto industry, LunarCrush’s Social Intelligence now extends across diverse sectors such as technology, politics, travel, music, and more. Recognizing the convergence of crypto, the Internet, and Artificial Intelligence (AI), LunarCrush is making a significant leap forward in their transparency efforts through social intelligence. By launching the Social Intelligence Paranet on the OriginTrail Decentralized Knowledge Graph (DKG), LunarCrush aims to enhance content collection through incentivized crowdsourcing and enable the creation of AI-powered services on this trusted knowledge base.
The Decentralized Knowledge Graph and the Social Intelligence Paranet
The Social Intelligence Paranet will operate on the OriginTrail DKG, a permissionless peer-to-peer network that ensures all social content published to the Paranet is discoverable, verifiable, and attributed to its owners. This setup allows AI services leveraging this knowledge base to avoid challenges like hallucinations, managed bias, and intellectual property violations. For an in-depth understanding of the technical design of paranets, DKG, and decentralized Retrieval-Augmented Generation (dRAG), we recommend reviewing the OriginTrail Whitepaper.
The Social Intelligence Paranet Initiative
Aligned with LunarCrush's growth trajectory, the Social Intelligence Paranet will initially target the crypto sector, attracting high-quality content creators and community members from various crypto projects. LunarCrush will also mine knowledge tied to their social insights, such as Alt Rank, Top Creators, and Sentiment analysis. Beyond knowledge mining, the Social Intelligence Paranet will feature the first AI-powered tool to interact with top knowledge assets on the Paranet, supported by LunarCrush. This AI-powered tool will be accessible to users paying with BUZ tokens. All BUZ tokens spent by users will be recycled as additional rewards for knowledge mining.
In the upcoming weeks, a comprehensive proposal for the Social Intelligence Paranet will be submitted to the NeuroWeb community for approval. The proposal will include:
- Knowledge Assets created from LunarCrush APIs
- An incentives model for knowledge miners targeting the first category of knowledge
- A demo of the LunarCrush AI tool
Advancing the Wisdom of the Crowds
The traditional wisdom of the crowds concept eliminates idiosyncratic noise associated with individual judgment by averaging a large number of responses. Social Intelligence takes this concept further by unlocking actionable information through high-quality, curated knowledge enhanced with specific domain expertise. The rise of AI introduces the potential for another leap forward in extracting wisdom from a vast body of knowledge. Incentivized crowdsourcing to collect superior social content provides an ideal foundation for AI services to uncover wisdom that is not immediately apparent. While a conversational tool is the initial step, subsequent developments will include AI agents performing comprehensive tasks such as market analysis and prediction market suggestions. As the Social Intelligence Paranet expands beyond the crypto field, it promises to support enhanced decision-making powered by the wisdom of the crowds across various topics.
Growing the Buz Economy: Announcing the Social Intelligence Paranet Launch was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.
Sean Miller, RSA
Abstract
Enterprises should consider using passkeys, especially if they are currently relying on passwords. By replacing these credentials with passkeys, enterprises will immediately reduce the risk of phishing and eliminate credential reuse, improving authentication service security. Different types of FIDO authenticators may be used to meet users' needs with a balance between convenience and security. For enterprises that require high levels of identity assurance, internal security policies, or regulatory requirements, additional scrutiny is needed to determine the appropriate type of passkey. It is important to look at both the enterprise as a whole, as well as parts of the organization, because high assurance requirements may not apply to the entire enterprise.
For many high assurance scenarios, attested device-bound passkeys may be more desirable. Relying parties with high assurance requirements will need to decide whether to accept all types of authenticators and adapt their authentication flow based on the attestation characteristics or reject registrations from unattested or unacceptable authenticators at the risk of a poor user experience.
Audience
This white paper is intended for IT administrators and enterprise security architects who are considering deploying FIDO authentication across their enterprises and defining life cycle management policies. This paper provides an overview of the different use cases for multi-factor authentication (MFA) and the FIDO authenticator choices available to administrators. The intent is to help guide administrators in choosing the right authenticator types for their specific environment. Companies requiring higher levels of security, such as those involved in healthcare, government organizations, or financial institutions that have a hard requirement around the control of the credential, in particular should read this white paper.
It is assumed that the reader has an understanding of FIDO architecture, relying parties, protocols, and has read “FIDO EDWG 2023 Papers – Introduction” that introduces key concepts used in this white paper.
1. Introduction
This document focuses on deploying passkeys for enterprise users in high assurance environments.
Readers can find an introduction to the series of papers here. The introductory whitepaper provides additional descriptions and links to all papers in the series, which cover an array of use cases from low to high assurance. Most enterprises will likely have use cases that span more than one of these papers, and readers are encouraged to review the white papers relevant to their deployment environment.
This white paper examines what it means to be in a high assurance environment and how that may influence how FIDO is used. More specifically, the document addresses the challenges with password-only authentication and proposes passkeys as a stronger, phishing-resistant alternative to using passwords to authenticate users. Additionally, the document provides some adoption considerations for IT and security professionals to consider to ensure compliance with regulatory and security requirements for high assurance authentication scenarios. This white paper examines the use cases of registering a device, using a registered device, and dealing with recovering a lost device.
A key part of deciding whether a passkey should be allowed in an environment is attestation. Attestations can be provided for credentials as part of the registration process, and relying parties can trust them as evidence of the provenance of the authenticator being used. For high assurance enterprise scenarios, attestations should always be requested. What can be discovered from the attestation associated with the credential, or the absence of any attestation, can help drive policy decisions about whether to accept the registration. Without any attestation, it may be difficult for the relying party to decide whether the credential should be allowed. It may reject the registration outright, making for a poor user experience, or the enterprise may choose to employ additional, conditional multi-factor authentication (MFA) along with FIDO authentication to meet the high assurance requirements. With an attestation, the enterprise has assurances about the provenance, manufacture type, certifications, and features of the authenticator, and can often rely on the authenticator as an MFA device, since it provides multiple factors such as the credential plus a PIN to unlock it.
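As a concrete illustration of requesting an attestation at registration time, the following TypeScript sketch shows a browser-side WebAuthn call with attestation set to "direct". The relying party ID, user details, and challenge handling are placeholder assumptions; a production relying party would generate the challenge and validate the returned attestation server-side.

```ts
// Sketch: creating a passkey while asking the authenticator for a direct
// attestation statement. Values such as rp.id and the user record are
// illustrative placeholders.

async function registerWithAttestation(challenge: Uint8Array): Promise<PublicKeyCredential> {
  const options: PublicKeyCredentialCreationOptions = {
    challenge,                                            // server-generated, single-use
    rp: { id: "example.com", name: "Example Corp" },
    user: {
      id: new TextEncoder().encode("user-1234"),          // opaque user handle
      name: "alice@example.com",
      displayName: "Alice Example",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // ES256
    authenticatorSelection: {
      residentKey: "required",                            // discoverable credential (passkey)
      userVerification: "required",                       // PIN or biometric
    },
    attestation: "direct",                                // request an attestation statement
  };
  const cred = await navigator.credentials.create({ publicKey: options });
  // The attestationObject in the response is what the relying party validates
  // server-side before deciding whether to accept the registration.
  return cred as PublicKeyCredential;
}
```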
Synced passkeys work well in many use cases and can still work for some high assurance scenarios, depending on the security or regulatory requirements of the enterprise. Synced passkeys are attractive because of their recoverability and ease of use; however, they also change where credentials reside and who controls them. Given this external control of the credentials, some additional MFA may be desired for synced passkeys where the enterprise has control of the lifecycle management of the MFA method.
The remainder of this white paper will examine enterprises or organizations that must meet high assurance requirements, based on Authenticator Assurance Levels [7] and FIDO Certified Authenticator Levels [8], in order to operate.
2. Passkey Use Cases
This section will focus on use cases around passkeys in an enterprise or an organization. There are many use cases for enterprises where synced passkeys work very well for ease and convenience in registering devices, using devices, and recovering lost devices, since the credentials are available on other devices. It is highly recommended that organizations look at all the benefits of synced passkeys to determine if they are appropriate for the organization. However, the use of synced passkeys, while convenient, may not meet all the security requirements for an enterprise or organization needing high assurance (e.g., AAL3 requirements). AAL3 has several requirements, with the most significant being the use of a hardware-based authenticator. Please refer to NIST for more detail on the different Authenticator Assurance Levels (AAL) [7]. Quite often, AAL3 applies to companies and organizations requiring higher levels of security, such as those involved in healthcare, government, or finance, which have a hard requirement around the control of the credential, specifically, that it is device-bound and never copied.
2.1. Registration
The enterprise or organization should first consider what device(s) they will support in their environment and how they will manage the provisioning of devices. For example, an organization may support an environment where users can bring their own device (e.g., mobile phone), or an organization may have very strict requirements around issued devices that meet specific security requirements such as PIN length, particular user presence features, or even specific hardware models. Finally, organizations need to consider whether they will allow passkeys to reside on multiple devices or just a single device. This has both security and recovery implications that need to be considered.
Organizations may have use cases that require credentials to be device-bound and not copyable at all, in which case synced passkeys are not recommended. Organizations may choose to allow synced passkeys alongside traditional MFA mechanisms, replacing the password with a passkey. However, if the organization has strict requirements for where the credentials can reside, they should look closely at restricting use to device-bound passkeys. These factors will decide how organizations manage registration. All these cases put some added burden on the relying party if types of passkeys need to be restricted.
The relying party may need to check whether certain requirements are met during the registration process, such as requiring an authenticator that meets or exceeds the FIDO L1+ certification [8]. To assess the authenticator's compliance with these requirements, the authenticator must provide an attestation that can be validated and examined. If an authenticator does not meet the L1+ requirements, the relying party may be forced to reject the registration, since nothing can be proven about the provenance of the credential, or it may consider an implementation with additional MFA to meet the requirements of high assurance.
If an attestation is provided, the relying party can check what type of device it is and if it meets the requirements of the enterprise or organization. The relying party may also want to restrict based on the unique identifier for the authenticator, provided an attestation is available. The unique identifier, known as an Authenticator Attestation Globally Unique Identifier (AAGUID), can be used to look up the details against the FIDO Alliance Metadata Service [2] to understand what type of device is being registered, the certification level, and what features it provides.
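The sketch below illustrates one way a relying party's server might use the AAGUID to gate registration. The MetadataEntry shape and the accepted status values are simplified assumptions, and it presumes the FIDO Metadata Service BLOB has already been fetched and its signature verified out of band.

```ts
// Sketch: gating registration on the authenticator's AAGUID and certification
// status, using a pre-verified snapshot of FIDO Metadata Service entries.

interface MetadataEntry {
  aaguid: string;
  statusReports: { status: string }[];            // e.g. "FIDO_CERTIFIED_L1plus"
  metadataStatement?: { description: string };
}

// Assumed policy: accept only L1+ or L2 certified authenticators.
const ACCEPTED_STATUSES = new Set(["FIDO_CERTIFIED_L1plus", "FIDO_CERTIFIED_L2"]);

function isAuthenticatorAcceptable(
  aaguid: string,
  mdsEntries: Map<string, MetadataEntry>,
): { accept: boolean; reason: string } {
  const entry = mdsEntries.get(aaguid);
  if (!entry) {
    return { accept: false, reason: "AAGUID not found in metadata; provenance unknown" };
  }
  const certified = entry.statusReports.some((r) => ACCEPTED_STATUSES.has(r.status));
  return certified
    ? { accept: true, reason: `Certified authenticator: ${entry.metadataStatement?.description ?? aaguid}` }
    : { accept: false, reason: "Authenticator does not meet the required certification level" };
}
```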
Enterprise Attestation is another form of attestation that can be leveraged during registration. This is implemented by some authenticator vendors to add additional information that is unique to the organization. Including this additional information as part of the attestation and narrowing allowed authenticators can be used to further enhance the registration experience.
Similarly, there may be flags about whether the credential is eligible for backup and/or if it has been backed up. These flags cannot be trusted, however, without some attestation that the device is certified. A relying party might decide to allow or deny the registration based on this information as well as other information provided at runtime.
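For illustration, the following sketch reads the backup eligibility (BE) and backup state (BS) flags from the WebAuthn authenticator data and applies a hypothetical high assurance policy. The policy itself is an assumption for demonstration; as noted above, the flags should only be relied upon when a trusted attestation is present.

```ts
// Sketch: reading the BE (bit 3) and BS (bit 4) flags from the flags byte,
// which follows the 32-byte rpIdHash in the authenticator data.

interface BackupFlags {
  backupEligible: boolean; // credential may be synced (multi-device)
  backedUp: boolean;       // credential is currently backed up / synced
}

function readBackupFlags(authenticatorData: Uint8Array): BackupFlags {
  const flags = authenticatorData[32];       // byte after the 32-byte rpIdHash
  return {
    backupEligible: (flags & 0x08) !== 0,    // BE
    backedUp: (flags & 0x10) !== 0,          // BS
  };
}

// Hypothetical policy: only accept device-bound credentials, and only when a
// trusted attestation backs up what the flags claim.
function allowDeviceBoundOnly(authData: Uint8Array, hasTrustedAttestation: boolean): boolean {
  if (!hasTrustedAttestation) return false;  // flags alone cannot be relied upon
  return !readBackupFlags(authData).backupEligible;
}
```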
Unfortunately, if the relying party rejects the registration of a credential, the user is forced to restart the registration process from step one with a different authenticator. Although WebAuthn [5] does not support a preflight mechanism to identify suitable authenticators, relying parties may provide feedback to the user before registration to identify acceptable authenticators. Additional guidance can be provided after a failed registration to guide the user's choice of authenticator. This guidance should be explicit: it should identify why the authenticator was rejected during registration, which authenticators meet the RP's requirements, and how to manage the browser-mandated optionality in communicating attestations.
Ideally, relying parties would be able to describe their authenticator requirements more prescriptively, allowing for a much better user experience in which end users can only select authenticators that meet those requirements, removing this burden from relying parties. Such changes have been proposed to WebAuthn, but they have not yet gathered the support of platform vendors.
Another approach for enterprises might be not to offer any registration use case exposed to the end user. Instead, the enterprise would manage the lifecycle of registering the devices before they are provisioned to users. Similarly, the enterprise might provide some form of supervised registration experience to ensure only authorized authenticators are provisioned and registered. This avoids a number of pitfalls with the user experience mentioned above but puts more lifecycle management burden on the enterprise.
2.2. Sign In
Once a credential has been registered, FIDO credentials can be accessed when needed at authentication. The application(s) will leverage the WebAuthn browser API or platform passkey APIs to perform a FIDO authentication using a registered device. Depending on the type of registered device, there will be multiple factors involved in the authentication, such as entering a PIN or responding to a user presence challenge. These interactions are required so that there is a high level of assurance that the user is who they claim to be and is not impersonating another user. These requirements need to be enforced during the registration process to ensure that only devices meeting the requirements of the enterprise or organization are allowed.
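A minimal browser-side sign-in sketch is shown below, assuming the relying party's server supplies the challenge and (optionally) an allow list of credential IDs; identifiers such as example.com are illustrative placeholders.

```ts
// Sketch: authenticating with a previously registered passkey via the
// WebAuthn browser API, requiring user verification (PIN/biometric).

async function signInWithPasskey(
  challenge: Uint8Array,
  allowedCredentialIds: Uint8Array[],
): Promise<PublicKeyCredential> {
  const options: PublicKeyCredentialRequestOptions = {
    challenge,                                   // server-generated, single-use
    rpId: "example.com",
    userVerification: "required",                // enforce the second factor
    allowCredentials: allowedCredentialIds.map((id) => ({
      type: "public-key",
      id,
    })),
  };
  const assertion = await navigator.credentials.get({ publicKey: options });
  // The server verifies the signature, challenge, origin, and user-verification
  // flag before establishing the session.
  return assertion as PublicKeyCredential;
}
```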
The only difference in this use case between synced passkeys and device-bound passkeys is what needs to be authenticated. For device-bound passkeys, the original hardware device used during the registration process is needed. Synced passkeys may be accessed from multiple devices that have access to an account hosted by a passkey provider. Furthermore, some synced passkeys may be shared after registration. Relying parties do not have a mechanism for identifying shared credentials in the current specifications, making it harder to understand and manage the lifecycle of synced passkeys.
There are several enterprise use cases covered in the white paper on "Choosing FIDO Authenticators for Enterprise Use Cases" [4]. Organizations should review these to evaluate how FIDO is leveraged. In particular, whether an organization plans to rely on FIDO as a first factor (passwordless) or as a second factor is a key decision, and the white paper may help organizations understand what truly requires high assurance. For example, there may be a specific project with high assurance needs, or a use case may apply to an entire industry driven by government or regulatory requirements. Employees might be allowed to use a synced passkey to access a laptop, for example, but then need to use a device-bound passkey to sign in to a specific application restricted to certain employees with a particular clearance level.
2.3. Recovery/Lost Device
Recovery is where a synced passkey shines. If one loses a FIDO device that holds a credential, they can simply access the credential from a different device that shares the same platform account. This is convenient, but it also means that a passkey is only as secure as the platform account with which it is associated. Enterprises should examine vendor solutions to understand how secure they are before relying on a service external to the organization. For example, does it provide end-to-end encryption with keys that are not known to the vendor? What additional measures, like MFA, are used to secure the user's account? What process is used for account recovery? End users may not be concerned about such matters, but these details may represent a security concern for the organization's security administrators. The organization's security requirements need to be examined to see if an external party can store and manage credentials. Furthermore, without requiring attestations, the relying party has no idea who or what issued a credential, whether it be the platform, a roaming authenticator, a browser plug-in, or something else. As a result, the relying party cannot provide any guidance on how to recover access to the credentials while maintaining high assurance. An alternative form of account recovery, external to recovering the FIDO credential, would be needed to verify the identity of the user and issue a new device and credentials. Finally, when synced passkeys are used, the recovery of a passkey through the provider is not visible to the relying party. This represents a potential attack surface that the enterprise is unaware of.
For device-bound passkeys, the recovery process is more involved and will likely require the involvement of a help desk [6] to issue a new device and possibly revoke access for the old device. This is a security-first approach over convenience that allows an enterprise or organization to control who has devices. It does mean there are additional steps needed for the end user before they can regain access. However, this gives enterprises more control over the lifecycle of the credentials, allowing enterprises to revoke or expire authenticators at any point and be able to guarantee that credentials are not copied or do not exist outside enterprise controls. Some enterprises have solved this by provisioning multiple devices so users can self-recover. Ultimately, there is a business decision to be made regarding recovery models. In some cases, it may be appropriate to block access until the user can receive a new device, taking loss of productivity over a lower security model. The extra burden highlighted in the registration step if an enterprise chooses to manage the registration experience has a direct impact on the recovery/replacement experience.
2.4. Unregistering
At some point an employee will either leave a project or the enterprise overall. The enterprise will want to be sure they have control over credentials and unregister their use so access is no longer possible. This is a bigger consideration when it comes to synced passkeys where the enterprise does not have full control of the lifecycle and management of the credentials. If synced passkeys require additional MFA, the enterprise can control the MFA aspect, expiring the factors involved so authentications no longer are allowed. Device-bound passkey environments have much more control over unregistering devices, either by physically handing in a device and knowing no copies were made, or invalidating/expiring the device so subsequent authentication attempts fail.
The credential lifecycle requires the ability to disable or remove a credential, whether due to a change in status of an employee, such as a leave of absence or separation from the organization, or due to the potential loss or compromise of a credential. Passkeys differ from passwords in these instances since the user may have multiple passkeys registered with the relying party, as opposed to passwords, where the user is expected to only have one password per relying party. In the case of a permanent separation between the user and enterprise, disabling the user account and/or rotating the credential in the service is standard practice to ensure the user is no longer able to authenticate. If the separation is temporary, such as for leave of absence, enterprises may choose to rotate all the user’s credentials or disable the user account until the user returns.
In the case of credential loss, the next steps are dependent upon the deployment scenario. Users with device-bound passkeys who lose their security key should have the credential revoked by the service. Synced passkeys create additional challenges. If the device has been compromised, all credentials resident on the device, including those resident in different passkey providers, should be treated as compromised and revoked by the RP. If the user’s passkey provider account has been compromised, the impacted credential(s) stored with the provider must be revoked. To facilitate revocation in these scenarios, RPs should allow credentials to be named or otherwise identified by the user during registration to facilitate the revocation of specific credentials where possible. Administrative controls must narrow their focus on eliminating credentials from the RP rather than removing the credential private key material from either hardware security keys or a passkey provider’s sync fabric, which may not be possible.
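To illustrate the lifecycle controls described above, here is a hypothetical, minimal relying-party credential store that supports naming, revoking a single credential, and revoking all of a user's credentials. The record shape and class are assumptions for demonstration, not a prescribed schema.

```ts
// Sketch: RP-side credential records with per-credential and per-user revocation.

interface PasskeyRecord {
  credentialId: string;     // base64url credential ID
  userId: string;
  label: string;            // user-supplied name chosen at registration
  backupEligible: boolean;  // from the BE flag observed at registration
  revoked: boolean;
}

class CredentialStore {
  private records = new Map<string, PasskeyRecord>();

  register(record: PasskeyRecord): void {
    this.records.set(record.credentialId, record);
  }

  // Revoke one named credential, e.g. after a lost security key.
  revoke(credentialId: string): void {
    const rec = this.records.get(credentialId);
    if (rec) rec.revoked = true;
  }

  // Revoke everything for a user, e.g. on separation or suspected provider compromise.
  revokeAllForUser(userId: string): void {
    for (const rec of this.records.values()) {
      if (rec.userId === userId) rec.revoked = true;
    }
  }

  isUsable(credentialId: string): boolean {
    const rec = this.records.get(credentialId);
    return !!rec && !rec.revoked;
  }
}
```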
3. Deployment Strategy
In a high assurance environment, the enterprise is likely going to want to manage the distribution and retirement of all authenticators. Device-bound passkeys would be managed by IT and provisioned to individuals. Relying parties would need to check for attestations and only allow the registration of authenticators that are managed by the enterprise or organization. If attestations are absent or do not meet the security requirements, the registration should fail. Processes should be established to manage the pool of authenticators to ensure they are retired when individuals leave or no longer require high-level access. Lastly, the organization or enterprise should define what the process looks like for recovering lost/stolen devices. Depending on how critical the access is to the continuity of the business, multiple hardware devices might be issued for a given individual to ensure they always have access.
4. Conclusion
There is no argument that passkeys are a strong, phishing-resistant alternative to traditional passwords. In an enterprise environment, it is important to look at security and regulatory requirements to determine whether synced passkeys work, or whether there are stricter constraints such as internal security policies, regulatory, or compliance requirements that require the use of device-bound passkeys. With either approach, enterprises should spend the time to understand how registration, management, and recovery of FIDO credentials will be handled. This includes important use cases like the (external) storage of credentials, recovery of lost credentials, and unregistering devices when employees leave. Based on the requirements of the enterprise, passkeys may work without any customizations, or enterprises may need to invest to ensure their authentication experience is more managed and filtered to specific devices.
5. Next Steps: Get Started Today
- Use FIDO standards.
- Think about what your relying parties are supporting and consider your enterprise security requirements.
- Passkeys are far more secure than passwords. Look for the passkey icon on websites and applications that support them.
For more information about passkeys, visit the FIDO Alliance site [3].
6. References
[1] FIDO Deploying Passkeys in the Enterprise – Introduction
[2] FIDO Alliance Metadata Service – https://fidoalliance.org/metadata/
[3] Passkeys (Passkey Authentication) –
https://fidoalliance.org/passkeys/#:~:text=Can%20FIDO%20Security%20Keys%20support,discoverable%20credentials%20with%20user%20verification.
[4] FIDO Alliance White Paper: Choosing FIDO Authenticators for Enterprise Use Cases –
https://fidoalliance.org/white-paper-choosing-fido-authenticators-for-enterprise-use-cases/
[5] WebAuthn – https://fidoalliance.org/fido2-2/fido2-web-authentication-webauthn/
[6] FIDO account recovery best practices –
https://media.fidoalliance.org/wp-content/uploads/2019/02/FIDO_Account_Recovery_Best_Practices-1.pdf
[7] NIST Authenticator Assurance Levels – https://pages.nist.gov/800-63-3-Implementation-Resources/63B/AAL/
[8] FIDO Certified Authenticator Levels – https://fidoalliance.org/certification/authenticator-certification-levels/
We would like to thank all FIDO Alliance members who participated in the group discussions or took the time to review this paper and provide input, specifically:
Matthew Estes, Amazon Web Services
John Fontana, Yubico
Rew Islam, Dashlane
Dean H. Saxe, Amazon Web Services, Co-Chair FIDO Enterprise Deployment Working Group
Johannes Stockmann, Okta
Shane Weeden, IBM
Khaled Zaky, Amazon Web Services
FIDO Enterprise Deployment Group members
Jerome Becquart, Axiad
Greg Brown, Axiad
Matt Estes, Amazon Web Services
The intent of this white paper is to provide guidance for organizations as they analyze the abilities and features of both device-bound passkeys and synced passkeys to determine how both credential types can be utilized in a moderate assurance environment. In this paper, the term "moderate assurance" refers to an environment or organization where the legal, regulatory, and security requirements are flexible enough to allow for the use of both types of credentials: synced passkeys to replace passwords and multi-factor authentication (MFA) for standard user accounts, and device-bound passkeys for user accounts that require the highest level of protection and assurance. The paper is designed to provide a comparison of the features and requirements supported by device-bound passkeys and synced passkeys, providing a vision of how both types of credentials can be utilized together in an organization that has moderate assurance needs.
Audience
This white paper is one in a series of white papers intended for anyone who is considering deploying FIDO Authentication across their organization, including IT administrators, enterprise security architects, and executives.
Readers can find an introduction to the series of papers here. The introductory white paper provides additional descriptions and links to all papers in the series, covering an array of use cases from low to high assurance. We expect that most enterprises will have use cases that span more than one of these papers and encourage readers to review the white papers that are relevant to their deployment requirements.
The white paper assumes that the reader has a foundational understanding of FIDO2 credentials and the role they play in the authentication process; introductory information on FIDO2 can be found here: FIDO2 – FIDO Alliance.
The initial implementations of FIDO2 credentials were created as device-bound passkeys on either a roaming authenticator or platform authenticator, where the private key of the credential is stored on the device's authenticator and not allowed to be exported, copied, backed up, or synchronized from the authenticator. This configuration presents a very secure and phishing-resistant solution for authentication that gives relying parties (e.g., websites or service providers) a very high level of confidence that the user and the device are legitimate users of the system. With this high level of assurance, however, come some challenges – primarily regarding usability and account recovery. For example, because there is no way to get the private key off the authenticator, if the device the private key is stored on becomes lost or damaged, then access to the resources that key authenticated would be lost. With device-bound passkeys, the solution is to register a second device-bound passkey with every relying party. This creates a more difficult user experience, as the user would be required to register both authenticators. This burden is somewhat reduced for organizations that have consolidated their authentication flow by using an identity provider (IdP) to federate access to their applications, as the relying party is then the IdP itself.
To solve these challenges, in May 2022 Apple, Google, and Microsoft announced their intent to support synced passkeys in their operating systems. Synced passkeys have many of the same characteristics as device-bound passkeys, including the continued use of private and public key pairs. One significant difference, however, is that synced passkeys allow for the private key of the credential to be synchronized to other devices the user owns that exist in the same vendor's synchronization fabric ecosystem (e.g., iCloud in the Apple ecosystem). Synced passkeys also allow for the creation of a more streamlined and user-friendly experience. All passkeys share several common security properties, are highly phishing resistant, and use unique key pairs to enable strong authentication. However, it is also important to note the differences between synced and device-bound passkeys. For example, synced passkeys introduce new security considerations when analyzed against a device-bound passkey. Conversely, synced passkeys can more easily address account recovery challenges.
As organizations work to evaluate how and where both credential types can be utilized in their environment, they will need to review and understand their organization's legal, regulatory, and security requirements. When organizations evaluate these requirements, they will often refer to the combination of these requirements as an authenticator assurance level (AAL) and will reference documentation from the National Institute of Standards and Technology (NIST), which provides guidance and recommendations for different assurance levels. While there is currently work underway by NIST to update these assurance levels to better incorporate synced passkeys, the current standards can be helpful when evaluating the implementation of device-bound passkeys and synced passkeys into an organization. More information regarding NIST and AALs can be found here: Authenticator Assurance Levels (nist.gov).
In terms of this white paper, a moderate assurance environment is an organization that has several different authentication use case scenarios that can be met by a combination of AAL1 and/or AAL2 as well as AAL3 levels of assurance. This white paper dives deeper into the advantages and disadvantages of both device-bound passkeys and synced passkeys, providing a comparison that an organization can use, along with its own legal, regulatory, and security requirements, to determine how and where to implement each credential type in its moderate assurance environment. The goal is to let organizations take advantage of the secure, phishing-resistant, and user-friendly authentication that FIDO2 credentials provide in all parts of the organization.
2. FIDO Credential Adoption Considerations
When organizations are evaluating the use of both device-bound passkeys and synced passkeys to support the AAL1, AAL2, and AAL3 requirements of their organization, there are several factors that they should consider. These factors are described below and are intended to provide the organization with the information they need to help analyze both types of credentials and determine where they can be used in their enterprise.
2.1. User Experience
In terms of user experience, the goal of using FIDO credentials to authenticate to a system has always been to provide an easy-to-use and effortless process for the user. The original FIDO implementations provided a streamlined sign-in experience, but still presented some user experience challenges.
Passkeys introduce several enhancements to help improve the user experience, including a new feature called "passkeys Autofill UI" that gives users easier access to passkey creation and provides an autofill-like experience: users simply pick the credential they want to use when authenticating and no longer type in their username or password. This experience is quite easy to use and is very similar to the experience that most users already like and are comfortable with when using solutions such as password managers. Creating a passkey user experience that users prefer over their current password experience removes the adoption hurdle seen with previous passkey implementations.
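As an illustration of the autofill experience, the sketch below uses WebAuthn conditional mediation so that available passkeys appear in the browser's autofill UI. It assumes a sign-in form input annotated with autocomplete="username webauthn" and a server-supplied challenge; the relying party ID is a placeholder.

```ts
// Sketch: enabling the passkey autofill (conditional UI) sign-in flow.

async function enablePasskeyAutofill(challenge: Uint8Array): Promise<void> {
  // Feature-detect conditional mediation support before requesting it.
  if (!(await PublicKeyCredential.isConditionalMediationAvailable?.())) return;

  const assertion = await navigator.credentials.get({
    mediation: "conditional",          // surfaces passkeys in the autofill dropdown
    publicKey: {
      challenge,                       // server-generated, single-use
      rpId: "example.com",
      userVerification: "required",
    },
  });
  // Send the assertion to the server for verification, then sign the user in.
  console.log("Selected passkey:", (assertion as PublicKeyCredential).id);
}
```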
2.1.1 Backup, Lost Devices, and Recovery
With device-bound passkeys, the private key is stored on, and not allowed to leave, the authenticator. This creates a very secure solution but does create challenges for users and enterprises regarding backup of the key data, loss of the authenticator, and addition of new authenticators for the user. While there are recommended recovery practices for device-bound passkeys (FIDO_Account_Recovery_Best_Practices-1.pdf (fidoalliance.org)), synced passkeys resolve these challenges in a more user-friendly manner. With the implementation of a synced passkey solution, the user no longer has to register multiple authenticators with a relying party to ensure continued access in the event of a lost authenticator. If an authenticator is lost, a user can recover their passkey by using the recovery process provided by the passkey provider. Additionally, synced passkeys make for a better user experience, as a user does not have to register unique credentials per device or maintain multiple device-bound passkeys to minimize the risk of credential loss. Once configured, synced passkeys are available across all devices synced with the passkey provider.
Synced passkeys do, however, create a dependency on the passkey provider and their synchronization fabric. Each provider implements their own synchronization fabric, which includes their own security controls and mechanisms to protect credentials from being misused. Organizations with specific security or compliance requirements should assess which provider(s) or hardware security keys meet their requirements.
Synced passkeys have a lower security posture, as they allow the private key on the authenticator to be synchronized to authenticators on other devices the user has in the same vendor's ecosystem. Organizations should also be aware that there currently are no standards or systems that allow them to keep track of which devices these credentials have been created on and stored on, nor mechanisms to identify when a credential has been shared with another person. For use cases in an organization that require a high level of assurance, the fact that this information cannot be determined or obtained means that synced passkeys would not be a good solution for those specific organizational use cases, and organizations should look to device-bound passkeys to support them.
2.3 Attestation and Enforcement of Credential Type
Attestation is a feature that is designed to enhance the security of the registration process. Attestation mechanisms are defined by the specifications as an optional feature, though most hardware security keys implement it. Attestation is the ability of the authenticator to provide metadata about itself back to the relying party so that the relying party can make an informed decision on whether to allow the authenticator to interact with it. This metadata includes items such as an Authenticator Attestation Globally Unique Identifier (AAGUID), which is a unique ID that represents the vendor and model of the authenticator, the type of encryption that the authenticator uses, and the PIN and biometric capabilities of the authenticator. Some authenticator vendors also support a feature called Enterprise Attestation that allows an organization to add additional uniquely identifying information in an attestation that is included with an authenticator registration request, with the intent to use this additional information to support a controlled deployment within the enterprise where the organization wants to allow the registration of only a specific set of authenticators. Additional information about Enterprise Attestation can be found in this white paper: FIDO-White-Paper-Choosing-FIDO-Authenticators-for-Enterprise-Use-Cases-RD10-2022.03.01.pdf (fidoalliance.org).
At the time of publication, synced passkeys do not implement attestation, which means they are not an appropriate solution for scenarios with highly privileged users that require higher levels of assurance or for organizations that want to implement Enterprise Attestation. To support these highly privileged users, relying parties and organizations have historically looked to, and will need to continue to look to, device-bound passkeys and authenticators from vendors that support and include attestation in their solutions. For organizations that have regulatory, legal, or security requirements that require all users to be treated as high privilege users or have a need to implement Enterprise Attestation, it is recommended that only device-bound passkeys be implemented in their environment. A companion white paper, “High Assurance Enterprise Authentication,” provides details on this scenario and can be found here: https://media.fidoalliance.org/wp-content/uploads/2023/06/FIDO-EDWG-Spring-2023_Paper-5_High-Assurance-EnterpriseFINAL5.docx-1.pdf. Moderate assurance organizations can support all their users by implementing synced passkeys for their standard users to replace passwords and MFA with a more secure solution and then use device-bound passkeys for highly privileged users and their access to resources that require the highest level of assurance.
Implementing both types of passkeys in the same authentication domain does, however, create an additional challenge: organizations must take additional steps to ensure that the correct type of passkey is used when accessing resources, for example, ensuring that a highly privileged user is using a device-bound passkey and not a synced passkey when accessing a resource that requires a high level of assurance. Organizations can leverage the user risk evaluation and policy engine framework of their identity provider to solve this challenge. Watermarking the user's session with an identifier representing the AAL (or other properties of their choosing) to be used in downstream authorization decisions can also solve this challenge. In federated authentication environments, this may be communicated using standards such as the Authentication Method Reference (amr, RFC 8176) values standardized by OpenID Connect.
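As a sketch of the downstream authorization idea, the following TypeScript checks the amr values from an OIDC ID token before allowing a high assurance action. Which amr values (RFC 8176) an identity provider actually emits is deployment-specific; the use of "hwk" here to stand in for a device-bound authenticator is an assumption for illustration.

```ts
// Sketch: enforcing a device-bound authenticator for privileged operations
// based on the amr claim of an already-verified ID token.

interface IdTokenClaims {
  sub: string;
  amr?: string[];   // e.g. ["hwk", "user"] or ["swk", "pin"]
  acr?: string;     // some IdPs convey an assurance level here instead
}

function meetsHighAssurance(claims: IdTokenClaims): boolean {
  // Require evidence of a hardware-secured (device-bound) key.
  return claims.amr?.includes("hwk") ?? false;
}

function authorizeSensitiveAction(claims: IdTokenClaims): void {
  if (!meetsHighAssurance(claims)) {
    // Step-up: ask the user to re-authenticate with a device-bound passkey.
    throw new Error("Step-up authentication with a device-bound passkey required");
  }
  // ...proceed with the privileged operation
}
```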
3. Conclusion
In moderate assurance environments, both device-bound passkeys and synced passkeys may be implemented together to provide a more secure authentication solution for all use cases of the organization. The more user-friendly synced passkeys can be implemented to replace passwords and MFA for users with standard assurance level requirements, giving them a more secure authentication method that is also easier to use. For highly privileged users in the organization that require the highest level of security, device-bound passkeys can be issued that provide an even higher level of security and an additional level of trust in the authentication process. The white paper provides information comparing synced passkeys, with their better user experience, against device-bound passkeys, with their enhanced security features. Using this information, organizations can evaluate device-bound passkeys and synced passkeys to determine how both can be leveraged in their organization to provide easy-to-use and secure authentication methods that meet and exceed the requirements of their moderate assurance environment.
4. Next Steps
The next step for organizations is to start evaluating FIDO2 credentials so that they can move away from passwords, which are susceptible to phishing and are well documented to be a significant weakness in their overall security posture. Organizations that have moderate assurance needs and will implement both device-bound passkeys and synced passkeys should determine which credential type will provide the best return on investment, work towards implementing that credential type first, and then follow up by completing the deployment of the other credential type when possible. Implementing either type of FIDO2 credential is a large step forward in moving to a passwordless environment and significantly increases the overall security posture of the organization.
5. Acknowledgements
We would like to thank all FIDO Alliance members who participated in the group discussions or took the time to review this paper and provide input, specifically:
Karen Larson, Axiad
Jeff Kraemer, Axiad
Dean H. Saxe, Amazon Web Services, Co-Chair FIDO Alliance Enterprise Deployment Working Group
Tom Sheffield, Target Corporation
FIDO Enterprise Deployment Working Group Members
In Episode 3, we talk to Justin Sherman, founder of Global Cyber Strategies and adjunct professor at Duke University's Sanford School of Public Policy. We talk about data brokers, identity resolution and customer data platforms–the OG data suppliers, and whether or not the selling of personal information is safe for humankind.
The post “Unsafe at Any Click” – Episode 3 appeared first on Internet Safety Labs.
Khaled Zaky, Amazon Web Services
Abstract
This white paper describes the need for a more secure and convenient solution for authentication. Passwords have long been the standard for authentication, but the risks inherent to passwords reduce their efficacy as an authentication mechanism. Multi-factor authentication (MFA) solutions have been on the market for some time, but their widespread adoption has been slow due to various barriers. Passkeys are an authentication solution that reduces the adoption barriers of traditional MFA mechanisms while offering improved security, ease of use, and scalability over passwords and classic MFA solutions. Passkeys utilize on-device biometrics or PINs for authentication and provide a seamless user experience. This white paper outlines the benefits of passkeys, the user experience, and adoption considerations for enterprises.
1. Introduction
Passwords have long been the standard for authentication, but their inherent security flaws make them exploitable. Many passwords can be easily guessed or obtained through data breaches, and the reuse of passwords across multiple accounts only exacerbates the problem. This vulnerability makes them susceptible to credential stuffing attacks, which use leaked or commonly used passwords to gain unauthorized access to user accounts. In fact, passwords are the root cause of over 80% of data breaches, with up to 51% of passwords being reused. Despite these security concerns, many consumers and organizations continue to rely solely on passwords for authentication. According to recent research by the FIDO Alliance, 59% of consumers use only a password for their work computer or account.
Traditional multi-factor authentication (MFA) mechanisms, such as one-time passwords (OTPs) delivered via SMS, email, or an authenticator app, are used by organizations to reduce the risk associated with a single-factor, password-based authentication system. Organizations using single-factor authentication with passwords, or those that have deployed OTPs to reduce phishing and credential stuffing, can implement passkeys as a password replacement to provide an improved user experience, less authentication friction, and improved security properties using devices that users already use—laptops, desktops, and mobile devices. For an introduction to passkeys and the terminology, please see the FIDO Alliance's passkeys resource page. In the following pages, we will focus on migrating existing password-only use cases to passkeys. For additional use cases, please see here.
2. Why Are Passkeys Better than Passwords?
Passkeys are a superior alternative to passwords for authentication purposes and offer improved usability over traditional MFA methods. They offer several benefits, such as a better user experience, reduced cost of lost credentials, phishing resistance, and protection against credential compromise.
Synced passkeys offer a consistent authentication experience for users across multiple devices. This is made possible by leveraging the operating system platform (or a third party synchronization fabric such as that from password managers) to synchronize cryptographic keys for FIDO credentials. This allows for quick and easy sign-in using biometrics or a device PIN. Synced passkeys also improve scalability and credential recovery. With synced passkeys users do not have to enroll a new FIDO credential on every device they own, ensuring that they always have access to their passkeys, regardless of whether they replace their device.
On the other hand, device-bound passkeys, such as those on security keys, can be used across multiple devices because the security key itself is portable. Unlike synced passkeys, which are accessible on any synchronized device, device-bound passkeys never leave the specific physical security key.
In terms of security, passkeys are built on the FIDO authentication standards, providing strong resistance against the threats of phishing and credential stuffing. Additionally, passkeys rely on existing on-device security capabilities, making it easier for small and medium enterprises to adopt stronger authentication methods.
Finally, passkeys offer a comprehensive solution for secure and efficient authentication that is better than passwords and traditional MFA methods. With a seamless user experience, improved scalability, and enhanced security, passkeys are a valuable solution for organizations of all sizes.
3. Passkeys User Experience
3.1 Create a passkey: visual UX/UI
Note: This section provides an overview of the passkey registration and sign-in process using examples. The FIDO Alliance User Experience Working Group has developed UX guidelines for passkeys, which are available here.
1. In the passkey registration flow, users are first prompted to provide an email or username along with their password to authenticate.
2. Then, users simply follow the prompts to provide their on-device biometric or PIN authentication.
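For readers who want to see what sits behind these prompts, the following is a minimal browser-side sketch of the WebAuthn call that a relying party’s web page might issue to create a passkey. It is illustrative only; the RP, user, and challenge values are placeholders and are not taken from the white paper.

// Minimal sketch (browser TypeScript) of the call behind the passkey creation prompt.
// All names and values below are placeholders used for illustration.
async function createPasskey(challenge: Uint8Array, userId: Uint8Array): Promise<PublicKeyCredential> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,                                            // random value issued by the RP server
      rp: { id: "example.com", name: "Example RP" },
      user: { id: userId, name: "alice@example.com", displayName: "Alice" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // ES256
      authenticatorSelection: {
        residentKey: "required",                            // a discoverable credential, i.e. a passkey
        userVerification: "required",                       // triggers the on-device biometric or PIN prompt
      },
    },
  });
  // The response is returned to the RP server, which verifies it and stores the public key.
  return credential as PublicKeyCredential;
}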
3.2 Sign in with a passkey visual UX/UI
To sign in with a passkey, a user just selects their email or username. Available passkeys will be shown in the passkey autofill user interface.
4. Adoption Considerations for Enterprises
Within businesses large and small, there are systems and services dependent upon single-factor authentication using passwords. We collectively refer to these use cases as “low assurance use cases.” For low assurance use cases, technology leaders can displace password-only authentication mechanisms with passkeys, dramatically reducing the risk of phishing and eliminating password reuse and credential stuffing. However, even for low assurance use cases, businesses must consider factors that will influence their choice of technology and implementation, which we outline below.
As a prerequisite to deploying passkeys in the enterprise, leaders must clearly define the set of use cases and users, and assess the suitability of passkeys for that set.
4.1 Does the relying party (RP) support passkeys?
At the time of writing (Q2 2023), passkeys are a relatively new technology, and as such, broad-based support is not guaranteed. As organizations review their systems to identify candidates for migration to passkeys, leaders must start by identifying where passkeys are supported within their ecosystem.
First, for in-house developed or managed applications, how can passkey support be added to the application(s)? If a single sign-on (SSO) mechanism is used to federate multiple applications and services, adding passkey support to the Identity Provider (IdP) can propagate support for passkeys to numerous federated applications, creating a rich ecosystem of services supporting passkeys with engineering efforts focused on the SSO IdP. Conversely, if the environment uses multiple independent applications, each of which uses password-based authentication, organizations will have to prioritize FIDO implementation across their suite of applications to leverage passkeys, or consider migrating to a federated authentication model where the IdP supports passkeys.
Second, third-party developed or hosted applications may or may not support passkeys. If an organization’s service provider does not support passkeys today, inquire when support is expected. Alternatively, if the organization is pursuing a federated identity model, does the service provider support inbound federation? If so, end users can authenticate to the IdP with a passkey before federating to the service providers’ systems.
4.2 Which devices are used to create, manage, and authenticate with passkeys?
After identifying a set of targeted applications or IdPs, identify the users of the applications and the devices they use to access them. Generally speaking, users on modern operating systems, browsers, and hardware will have broad support for passkeys registered on a platform device, using a credential manager, or with a hardware security key. There are tradeoffs with each mechanism.
Today, passkey providers allow users to register passkeys that are synchronized to all of the devices the user has registered with the sync fabric. Passkey providers may be part of the operating system, browser, or a credential manager that stores and manages passkeys on behalf of the user. If the user loses or replaces their device, the passkeys can be synchronized to a new device, minimizing the impact on users. Typically, this is a good solution for users who use a small number of devices on a regular basis.
Conversely, hardware security keys create device-bound passkeys; these passkeys never leave the security key. If a user loses their hardware key, they must have a backup or perform account recovery for all credentials stored on the device. Passkeys may be shared with other users if they are not hardware bound.
Hardware security keys require connectivity to the user’s computing device through USB, Bluetooth, or NFC, whereas passkey providers are always available on the user’s devices once bootstrapped. Platform credentials may be used to authenticate on nearby devices using the FIDO Cross-Device Authentication (hybrid) flow. Enterprises should consider whether users who move between a number of shared devices should synchronize passkeys across all the shared devices, use hardware keys, or use the hybrid flow to best support their work style.
When users operate on shared devices using a single account (or profile), passkeys registered to the platform or credential managers are not a good fit. Device-bound passkeys on a hardware key are recommended for this scenario. If the user carries a mobile device, consider registering a passkey on the device and using the cross-device authentication flow to authenticate users.
Unlike passwords, all of the passkey solutions reviewed above provide strong phishing resistance, prevent credential theft from the RP, and eliminate credential reuse.
4.3 Registration & Recovery
If there are no restrictions on which device(s) or platform(s) the user can register their passkeys, users may self-provision passkeys by bootstrapping a new credential from their existing password using the device(s) of the user’s choice. If using hardware security keys, organizations should provide two per user to allow for a backup credential.
As long as a password remains active on the user account, the user can recover from credential loss following the self-provisioning described above. This step is only required if the user is unable to restore their credentials from their passkey provider.
5. Conclusion
Passkeys offer a significant improvement in security compared to traditional passwords, but it is important to carefully evaluate and understand the adoption considerations before proceeding with an implementation. Organizations should ensure their technical requirements, security needs, and management preferences align with the passkey solution. Not all use cases are suitable for a passkey-only implementation. For additional deployment patterns, see the other white papers in this series here.
6. Next Steps: Get Started Today
Organizations should upgrade their authentication method and take advantage of the stronger security that passkeys provide. Based on the FIDO authentication standards, passkeys offer a robust solution to the growing threat of phishing attacks. Look for the passkey icon on websites and applications that support it, and take the first step towards a more secure future. Don’t wait. Make the switch to passkeys today!
For more information about passkeys, visit the FIDO Alliance site.
7. Acknowledgements
We would like to thank all FIDO Alliance members who participated in the group discussions or took the time to review this paper and provide input, specifically (in alphabetic order):
Jerome Becquart, Axiad
Vittorio Bertocci, Okta
Greg Brown, Axiad
Tim Cappalli, Microsoft
Matthew Estes, Amazon Web Services
John Fontana, Yubico, Co-Chair FIDO Enterprise Deployment Working Group
Rew Islam, Dashlane
Jeff Kraemer, Axiad
Karen Larson, Axiad
Sean Miller, RSA
Dean H. Saxe, Amazon Web Services, Co-Chair FIDO Enterprise Deployment Working Group
Tom Sheffield, Target Corporation
Johannes Stockmann, Okta
Shane Weeden, IBM
Monty Wiseman, Beyond Identity
FIDO Enterprise Deployment Working Group Members
Dean H. Saxe, Amazon Web Services, Co-Chair FIDO Enterprise Deployment Working Group
1. Introduction
Last year FIDO Alliance, Apple, Google, and Microsoft announced their intentions to support passkeys: FIDO credentials that may be backed up and made available across devices that are registered to the same passkey provider. Since then, we have seen support for passkeys and beta implementations from multiple platforms and password managers. Enterprises have expressed interest in passkeys but do not know where to start, what type of passkeys work in their environment, or how passkeys fit in their authentication strategy.
It is important to note that the FIDO Alliance has embraced the term “passkey” to describe any passwordless FIDO credential. This includes synced passkeys (consistent with the original announcement and intent) as well as device-bound passkeys, which are FIDO authentication credentials that cannot leave the device they were issued on (e.g., a FIDO security key).
In the following series of papers, the FIDO Enterprise Deployment Working Group (EDWG) provides guidance to leaders and practitioners on deploying FIDO solutions, scaling from SMBs to large enterprises. Recognizing that there are a variety of use cases for FIDO credentials, from synced passkeys to device-bound passkeys, this series identifies key decision points for determining which solution(s) are a good fit across different enterprise use cases. Enterprises are likely to find that multiple FIDO-based solutions are required to meet their different use cases.
As organizations evaluate how to use passkeys in their environment, they will need to determine the legal, regulatory, and security requirements of their organization and evaluate how both synced passkeys and device-bound passkeys can meet these requirements.
We assume that the reader has a high-level understanding of the FIDO protocols; if not, please consult https://passkeys.dev/.
2. Why Choose Passkeys?
Passwords are the root cause of over 80% of data breaches, and up to 51% of passwords are reused, making them subject to credential stuffing attacks. FIDO credentials are inherently more secure than passwords due to their design. These credentials are unique cryptographic key pairs scoped to a specific origin (e.g., https://fidoalliance.org/) to prevent discovery by unrelated services. Unlike passwords, FIDO credentials are highly phishing resistant, and the credential, a private key, cannot be stolen from the relying party (RP) servers.
FIDO credentials can be utilized across a variety of use cases—from low to high assurance, balancing user experience, convenience, and security. Authenticators—ranging from hardware security keys to biometric hardware in phones, tablets, and laptops to password managers—enable enterprises to choose the right tools for their unique environments.
While all FIDO credentials are based on cryptographic key pairs, they do not exhibit the same security characteristics, nor are they all suitable for all use cases. For example, hardware security keys may be FIPS-certified devices with device-bound passkeys. RPs can identify these credentials based upon the attestation statements provided at registration. On the other hand, synced passkey implementations synchronize key material through a cloud-based service. The export and management of credentials in a third-party service introduces additional considerations and may not meet every organization’s security requirements. The table below summarizes the use cases and properties of device-bound and synced passkeys.
As you read the series you may encounter terminology that is unique to the FIDO ecosystem. Please consult the FIDO Technical Glossary for definitions of these terms.
We expect that most enterprises will have use cases that span more than one of these papers. Wherever organizations find themselves on this journey, they can start using FIDO credentials today to reduce credential reuse, phishing, and credential stuffing.
In the first paper, we examine how organizations can deploy passkeys to their users who are using passwords as their only authentication factor. By deploying passkeys, companies can immediately reduce the risk of phishing or credential stuffing for their staff while using corporate or personal devices for authentication. https://fidoalliance.org/fido-in-the-enterprise/.
There are many organizations that have deployed classic second factor authentication solutions such as SMS OTP, TOTP, and HOTP. In many cases, these deployments were tactical responses to reduce the success of phishing attacks. However, none of these mechanisms are immune to phishing. In the second paper of the series, we examine how passkeys can displace less phishing resistant mechanisms while improving the authentication user experience. https://fidoalliance.org/fido-in-the-enterprise/.
Enterprises in regulated industries may be obligated to utilize higher assurance authentication for some, or all, of their staff. These companies (or other companies with stringent security requirements) may be able to deploy synced passkeys, device-bound passkeys, or both to meet their authentication requirements. The third paper in the series provides guidance on deciding which FIDO-based solution(s) can meet these requirements. https://fidoalliance.org/fido-in-the-enterprise/.
The final paper describes using device-bound passkeys where functional or regulatory requirements require high assurance authentication. These scenarios use attestation data to securely validate the hardware devices used to generate and manage passkeys. This attestation data can be used to ensure compliance with regulatory and security requirements for regulated enterprises and use cases. https://fidoalliance.org/fido-in-the-enterprise/.
Assurance level | Device-Bound Passkeys | Synced Passkeys
Low Assurance | Sufficient | Sufficient
Moderate Assurance | Sufficient | May Be Sufficient
High Assurance | May Be Sufficient |
Khaled Zaky, Amazon Web Services
Monty Wiseman, Beyond Identity
Sean Miller, RSA Security
Eric Le Saint, Visa
This document intends to provide a comprehensive understanding of attestation’s role in enhancing and advancing the digital security landscape, specifically with respect to authentication. It focuses on the core function of attestation: verifying the origin and integrity of user devices and their authentication materials. FIDO credentials are discussed with a focus on how they offer more secure alternatives to traditional password-based systems and how FIDO attestation enhances authentication security for both Relying Parties (RPs) and end-users. In this document, RPs are those entities that provide websites, applications, and online services that require secure user access by confirming the identity of users or other entities. The FIDO Alliance’s historical journey is presented with practical analogies for understanding FIDO attestation, its enterprise-specific technical solutions, and the privacy aspects involved in the attestation process.
Audience
Targeted at CISOs, security engineers, architects, and identity engineers, this white paper serves as a guide for professionals considering the adoption of FIDO within their enterprise ecosystem. Readers should possess a baseline understanding of FIDO technologies and the meaning of attestation, and have a desire to understand why and how to implement attestation.
1. Introduction
While authentication is widely understood, attestation may be less familiar to many practitioners in the information technology field. Attestation, as understood within the FIDO protocols, confirms a set of properties or characteristics of the authenticator. In the physical world, we can rely on examining an object to inspect its properties and verify its authenticity. In the interconnected digital world, physical inspection is not practical. Devices used for FIDO authentication should be carefully checked before use, especially if their source or contents are uncertain. Certain transactions, especially those related to government, healthcare, or financial institutions, demand higher assurance, and it is vital that the Relying Party (RP) confirms the authenticator’s legitimacy in these cases. To ensure that high-assurance transactions are legitimate, RPs can employ attestation to verify the authenticity and properties of the authenticator.
A note on terminology: The terms “key” and “key pair” are common to several types of keys described in this paper. To alleviate this confusion, the term “passkey” will always be used when referring to a key used to authenticate a user. Other uses of the term “key” will be made specific by either the context or a modifier, such as Attestation Key.
In traditional password-based systems, it may be assumed that users and RPs keep passwords confidential. Because this assumption is not consistently enforced, breaches can occur. Using passkeys instead of passwords is a significant improvement, but some RPs may need more stringent policies to verify the authenticity of the authenticator and its properties.
Unlike passwords, passkeys use securely generated key material allowing access to websites and apps. Users and RPs rely on the authenticator for storage and management of this key material and therefore share the responsibility for secure handling of passkeys. All actors and components of the FIDO solution, including the authenticator, RP, and the passkey provider (when applicable), together ensure a robust security framework. This is in contrast to passwords, where the secure handling of passwords depends primarily on the user’s memory, behavior, the RP, and password managers (if used). RPs can leverage attestations to verify that passkeys are securely handled within properly implemented FIDO certified devices.
Attestation provides RPs with information about the authenticator protecting the user’s passkeys. This provides a means for the RP to enforce security policies for FIDO authentication. In the following sections, we delve deeper into the concept of attestation, its purpose, real-life scenario comparisons, and the problems attestation solves.
1.1 Real-World Analogies for FIDO Attestation
Drawing parallels with everyday security protocols offers significant insights. Both digital and physical environments demand rigorous checks and balances to validate identities and fortify trust. FIDO Attestation reflects the trust and verification processes familiar in the physical world.
To understand the pivotal role of FIDO attestation, consider its application in real-world identification and verification practices. These analogies underscore its integral function and efficacy:
Identity Document Verification: Just as individuals may produce official documents such as passports or driver’s licenses to authenticate identity, the verifier (e.g., immigration official) wants proof of the document’s authenticity and therefore checks for the relevant seals and marks. FIDO attestation provides proof of the authenticity of a user’s authenticator, offers statements for examination, and provides cryptographic signatures for verifying the authenticity of the authenticator and the statements.
Gaining Trust Through Authentication: Think of moments where trust is contingent on proof of identity or authority. For example, accessing a secure facility where a guard authenticates you based on your identity documents, authorizing access to the facility. FIDO attestation fosters trust in digital environments when used to confirm the authenticator provenance and authenticity during online registration.
Countering Threats and Weaknesses: In real-world scenarios, ID checks exist to counteract impersonation, forgery, and fraud. FIDO attestation identifies the origins of authenticators and assists RPs to detect registrations from devices with known vulnerabilities, thereby enabling them to ensure that users employ only secure devices.
2. Practical Implications and Use-Cases of FIDO Attestation
2.1 From the Perspective of a Relying Party
Delving deeper into FIDO attestation provides invaluable insights into the critical roles it plays in fortifying authentication systems:
Assured Authenticator Security and Compliance: For RPs operating in sensitive sectors, for example, finance or the public domain, there’s a heightened need to ascertain that authentication devices are secure and meet specific standards. FIDO attestation helps ensure that authenticators accessing services are not only secure, but also adhere to specific standards and regulations.
Authenticator Model Specificity and Trust in FIDO Authenticator Models: FIDO attestation is tailored to distinct authenticator models, ensuring that cryptographic proofs during registrations validate said authenticator model authenticity. Beyond general trust in the attestation process, this specificity allows the RP to confirm that the passkey used in the registration request originates from a particular FIDO authenticator model. Such granularity is paramount for RPs where the details of authenticator models are crucial due to regulatory or security reasons.
Verification Through Attestation Signature: As a user sets up a new account, the onboarding RP can authenticate that the “attestation signature” linked to the freshly generated passkey is indeed from a genuine authenticator model.
Incident Handling and Response: If a vulnerability is discovered in an authenticator, RPs checking attestations have the ability to discover which authenticators may be affected and require additional authentication factors or registration of a new credential for impacted users.
2.2 From the Perspective of the End-User
Although end users may not be aware of the technical details, FIDO attestation can enhance their online security:
Enhanced Trust in Services: When using services, particularly in high-assurance sectors such as banking or government portals, users can experience increased confidence. They understand that the RP isn’t just authenticating but is also ensuring that authenticators accessing the platform adhere to specific standards.
Authenticator Compliance: FIDO attestation assures RPs of authenticator compliance and security, giving users the benefit of reliable functionality of their authentication devices paired with desired RP-related services.
Transparent Registration and Onboarding: The registration process is designed for seamlessness, but includes an additional step when an RP requests attestation of a FIDO authenticator. At this step, users must provide their consent to share the attestation metadata with the RP. This ensures that while backend verifications related to attestations, certification path validations, and authenticator compliance are streamlined, the user is aware of and has approved the process.
3. FIDO Attestation Explained
In this section we describe FIDO attestation and FIDO attestation types.
3.1 What is FIDO Attestation?
Within the FIDO authentication framework, attestation is a process for verifying the authenticity of a user’s authenticator during the authentication process. The attestation can be used in conjunction with the FIDO Alliance’s Metadata Service [1] to get more information about the authenticator, including the model and certification level. An optional level of attestation, known as enterprise attestation, allows for further verification of specific authenticators; see section 4.5.
Note that the term ‘attestation’ might have different meanings outside of the context of FIDO. This paper discusses attestation only within the scope of the FIDO Alliance.
In FIDO registration, a key step is the creation of a user authentication passkey, which occurs regardless of whether attestation is involved. During this process, the user’s authenticator—such as a smartphone—generates a unique cryptographic key pair for each RP. The private key is securely stored within the authenticator, while the public key is shared with the RP, establishing a secure authentication framework. Additionally, during registration, the authenticator may provide an attestation, offering further assurance about the authenticator’s integrity.
In addition to generating the user’s authentication passkey, the FIDO authentication framework includes an optional attestation process. When attestation is requested, the authenticator may provide an attestation (synced passkeys do not currently provide attestations) by using an Attestation Key to sign the AAGUID (Authenticator Attestation Globally Unique ID) along with the passkey public key. This creates signed evidence that establishes a trust anchor, which the RP can use to validate that the authenticator’s properties meet the RP’s conditions through the MDS (FIDO Alliance’s Metadata Service [1]; see section 3.3 for additional information). If the authenticator cannot provide an attestation, the RP can still authenticate the user with the passkey and may obtain authenticator information (e.g., the AAGUID), but it may not obtain verifiable evidence that the required authenticator properties are present.
This attestation process helps protect against supply chain attacks, such as the introduction of substitute or counterfeit authenticators. By verifying the authenticity of the authenticator, the RP understands the properties of the authenticator and assesses whether it meets the expected security standards, particularly during the registration phase, to ensure the device’s legitimacy.
FIDO attestation is thus a key component of the broader security and privacy objectives of the framework. It minimizes reliance on passwords, fosters strong device authentication based on public-key cryptography, and aims to offer a standardized and interoperable approach to authentication across different platforms and devices.
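As an illustration of how an RP asks for this optional attestation, the sketch below shows the browser-side WebAuthn call with the attestation conveyance preference set to “direct” and the resulting attestation object extracted for the server. The option names come from the WebAuthn API; the surrounding RP, user, and challenge values are placeholders assumed for this example.

// Minimal sketch: requesting an attestation during WebAuthn registration. Setting the
// attestation conveyance preference to "direct" asks for the attestation statement;
// the default, "none", omits it. All values below are placeholders.
async function registerWithAttestation(challenge: Uint8Array, userId: Uint8Array): Promise<ArrayBuffer> {
  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "example.com", name: "Example RP" },
      user: { id: userId, name: "alice@example.com", displayName: "Alice" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],
      attestation: "direct",                                // request the attestation object
    },
  })) as PublicKeyCredential;
  // The CBOR-encoded attestation object (fmt, attStmt, authData) is sent to the RP server,
  // which validates it and, via the AAGUID, looks up the authenticator model in the MDS.
  const response = credential.response as AuthenticatorAttestationResponse;
  return response.attestationObject;
}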
3.2 Types of FIDO Attestation
There are several types of FIDO attestation which differ in how the attestation statement is signed. Note that none of these attestation types except Enterprise Attestation provide information about the specific authenticator. This is to preserve user privacy.
Self-attestation: The attestation statement is signed by the user’s passkey. This provides integrity protection for the attestation statement and provides no other assurances.
Basic attestation: The attestation statement is signed by a key created by the authenticator’s manufacturer and embedded into the authenticator. This provides integrity protection of the attestation statement and proof of the authenticator’s manufacturer. For privacy purposes, this key must be duplicated across many units of the same authenticator model (the current FIDO Alliance requirement is >100,000 devices). It is not unique to a specific authenticator instance.
Attestation CA (AttCA) or Anonymization CA (AnonCA): This is similar to basic attestation, except the attestation statement is signed by a TPM Attestation Key. In this case, the TPM, a hardware-based module where cryptographic operations occur and secrets are stored securely without leaving the module, has its Attestation Key’s certificate signed by a trusted authority managing the authenticator.
Enterprise attestation: This is discussed in section 4.5.
It should be noted that the FIDO2 specifications work along with the WebAuthn specification [2]. The type of attestation used is determined by examining fields within the attestation object, which are defined in the WebAuthn specification. The WebAuthn specification also defines a number of attestation statement formats, for example packed, TPM, and Android-key, and supports custom formats if needed.
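To make the format-inspection point concrete, the sketch below shows how an RP server could read which attestation format an authenticator used. It assumes Node.js and the third-party “cbor” npm package; neither is prescribed by the white paper, and the field names mirror the appendix example at the end of this paper.

// Minimal server-side sketch (Node.js + the "cbor" npm package, both assumptions of this
// example). The attestation object is CBOR-encoded and carries the fmt, attStmt, and
// authData fields.
import * as cbor from "cbor";

function attestationFormat(attestationObject: Buffer): string {
  const decoded = cbor.decodeFirstSync(attestationObject);  // { fmt, attStmt, authData }
  return decoded.fmt;                                       // e.g. "packed", "tpm", "android-key", or "none"
}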
3.3 Using AAGUID
The Authenticator Attestation GUID, or simply AAGUID, uniquely identifies the authenticator’s make (manufacturer) and model. It does not uniquely identify the specific authenticator. The AAGUID is returned by the authenticator when attestation is requested by the RP, and the RP may use it to determine whether the authenticator’s make and model meet its policies. Among other uses, the AAGUID is the lookup value within the FIDO Metadata Service (MDS) [1], providing the RP with detailed information about the authenticator.
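As one illustration of such a lookup, the sketch below fetches the MDS metadata BLOB and finds the entry for a given AAGUID. The URL and payload field names reflect the MDS3 format at the time of writing and should be treated as assumptions; a production RP must also verify the BLOB’s JWT signature against the FIDO Alliance root certificate, which is omitted here.

// Minimal sketch of an AAGUID lookup against the FIDO Metadata Service (MDS) BLOB.
// Assumes Node.js 18+ (built-in fetch and Buffer); the URL and payload field names are
// assumptions based on the MDS3 format. JWT signature validation is intentionally omitted.
async function lookupAuthenticatorMetadata(aaguid: string): Promise<unknown> {
  const blobJwt = await (await fetch("https://mds3.fidoalliance.org/")).text();
  const payload = JSON.parse(Buffer.from(blobJwt.split(".")[1], "base64url").toString("utf8"));
  // Each entry describes one authenticator model: its metadata statement plus status reports.
  return payload.entries.find((entry: { aaguid?: string }) => entry.aaguid === aaguid);
}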
The authenticator’s conveyance of the AAGUID provides no proof of its integrity or authenticity. The RP must trust the specific authenticator to provide truthful information.
This point is important to emphasize:
The AAGUID without attestation is “informational” only and does not provide any assurance of its authenticity.
Attestation provides a signature giving a level of assurance (depending on the type of attestation) of the authenticator’s identity.
4. Technical Solutions
This section describes the sequence of events and involved components that make up FIDO attestation.
4.1 Authentication vs. Attestation Keys
The use of keys and methods for user authentication with FIDO has been introduced in previous documents, but the use of keys and methods for attestation may be less familiar.
User Authentication: This is the process where the user demonstrates possession of the correct system credentials, utilizing a passkey instead of the traditional password, which is a common application of FIDO technology.
Attestation: This is the process of the authenticator using a key that is not assigned to a user, but instead assigned to the authenticator, to digitally sign a message providing proof of the message’s authenticity. The message involved is called the “attestation statement” and contains information about the authenticator. When the attestation statement is digitally signed by the authenticator’s attestation key, the RP can verify the validity of the attestation statement.
In summary:
A passkey authenticates the user to an RP.
An attestation key signs an attestation statement to authenticate its origin.
As stated in section 3.3, an RP may obtain the authenticator’s make and model by simply checking the authenticator’s AAGUID against the Metadata Service. Without this information being digitally signed by a key trusted by the RP, the RP has no proof it is authentic or associated with the authenticator being queried.
Note: As discussed in section 3.2, there are several attestation types. One of these, “self-attestation”, uses the User Authentication key to sign the attestation statement. This is not technically a contradiction, but a simplification provided to allow integrity protection, not authenticity, of the attestation statement.
4.2 Trust in the Attestation Key – Trust Chain
Fundamental to attestation is the RP’s trust in the Attestation Key. The Attestation Key must be generated by a trusted source and protected by the authenticator. The trusted source is typically the authenticator’s manufacturer; however, in the case of Attestation CA (AttCA) or Anonymization CA (AnonCA), a trusted agent or Certification Authority (CA) asserts the authenticity of the authenticator. The public part of the Attestation Key is obtained by the RP using a trusted channel, typically the FIDO MDS [1] mentioned previously.
4.3 FIDO Attestation Sequence
Attestation uses a key pair associated with an authenticator, not a user. It is important that all authenticators of the same make and model return the same attestation statement. The format of the attestation is examined later in this section, but it is important to understand that, at a high level, the attestation provides information about the type of authenticator, and it is not specific to a single device.
The following steps (1.a or 1.b then 2.) summarize a FIDO authenticator’s attestation lifecycle:
1. Authenticator Manufacturing: There are two models for provisioning the Attestation Key: case “a” for roaming authenticators, such as smartphones or USB security keys used across multiple platforms, and case “b” for platform authenticators, which are built-in authentication mechanisms within devices like laptops or smartphones.
Note: This two-model distinction is not architecturally required by the FIDO specifications, but it is the practical implementation known today and provides a simplified explanation for the purpose of this paper. The descriptions are generalizations; manufacturers may deploy different methods than described here.
Roaming Authenticator: The authenticator manufacturer generates an Attestation Keypair (AK) for a specific authenticator model. The manufacturer creates a certificate with the AK’s public key. The AK Certificate is commonly put into the MDS. This allows an RP to retrieve the AK Certificate from a trusted source, the MDS, when an AAGUID is provided. The AK Certificate itself is usually signed with the authenticator manufacturer’s issuer key. This creates a verifiable cryptographic chain from the authenticator back to its manufacturer.
Platform Authenticator: The authenticator is not shipped from its manufacturer with an attestation key that can be used for FIDO attestation. Instead, it relies on persistent keys within the platform authenticator. These keys are crucial cryptographic elements that the attestation service uses to generate a FIDO Attestation Key. The attestation service is trusted by the Relying Party to provide assurance in the platform authenticator’s integrity and compliance. The attestation service creates an attestation key that is used to sign an attestation object which asserts the properties of the authenticator. The RP must trust the attestation service in the same way it trusts the roaming authenticator’s manufacturer.
2. User Provisioning with Attestation: During registration (setting up the new account), a new User Credential (a passkey) is created with a unique cryptographic key pair, and the public key is sent to the RP. The RP may optionally require an attestation. Note that the user or the authenticator may ignore the requirement for attestation. If the authenticator possesses an attestation key and its use is allowed by the user, the user’s public passkey (along with the attestation statement) will be sent to the RP signed with the attestation private key. This allows the RP to verify the attestation statement, which includes the public passkey for the newly created user, providing confidence that the user’s private passkey originated from a specific authenticator with known properties.
4.4 A General Description of the Attestation Lifecycle
The attestation key generally has an associated attestation certificate, which links to a trusted root certificate of the manufacturer. Once the RP has determined the authenticity of the signed attestation statement, the RP can use the attestation statement along with the MDS to learn more about the authenticator. For example, the RP may want to understand what level of encryption is used and what type of activation secret (e.g., biometrics) is leveraged, and with what level of accuracy. In order to get details about the authenticator, an AAGUID value identifying the authenticator model is sent to the RP along with the newly created public passkey. Since the AAGUID represents a specific group of authenticator instances, such as a specific product release with a specific characteristic, a specific form factor, or enterprise branding, an RP can use this AAGUID to look up more information about the authenticator from the MDS.
As shown in the diagram, the attestation object, if provided, will indicate the format of the attestation statement, and then include some data the RP can examine. The attestation object includes a statement that typically contains a signature as well as a certificate or similar data providing provenance information for the attestation public key. Detail of the attestation object is provided in section 9.1 of the Appendix.
RPs should first verify the signature of the attestation statement and, once verified, examine the attestation statement itself. After identifying the attestation statement’s format and type, the RP reviews its contents and compares them against its policy.
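The sketch below illustrates that order of operations using the open-source @simplewebauthn/server library as one possible implementation; the library, the placeholder origin and RP ID, and the policy step are assumptions of this example rather than anything mandated by the paper.

// Minimal sketch of the verification order described above, using @simplewebauthn/server
// (an assumption of this example). The library validates the attestation statement's
// signature and certificate chain; only then does the RP examine the statement against policy.
import { verifyRegistrationResponse } from "@simplewebauthn/server";

async function verifyThenApplyPolicy(registrationResponse: unknown, expectedChallenge: string) {
  const result = await verifyRegistrationResponse({
    response: registrationResponse as any,        // the registration response sent by the browser
    expectedChallenge,                            // the challenge the RP issued for this ceremony
    expectedOrigin: "https://example.com",        // placeholder RP origin
    expectedRPID: "example.com",                  // placeholder RP ID
  });
  if (!result.verified) {
    throw new Error("Attestation statement signature did not verify");
  }
  // Signature verified: now compare the attested authenticator information (e.g. the AAGUID,
  // exposed via result.registrationInfo, whose exact shape varies by library version)
  // against the RP's policy before accepting the new credential.
  return result.registrationInfo;
}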
An example attestation response resulting from a direct request to the authenticator by an RP is provided in section 9.2 of the Appendix. The AAGUID provided in the attestation response can be used to obtain additional details about the authenticator from the FIDO Metadata Service.
4.5 Enterprise Attestation
By default, FIDO allows an authenticator to provide only product information using the AAGUID and high-level information about its type and capabilities, explicitly prohibiting an authenticator from providing uniquely identifying information. However, Enterprise attestation removes that limitation, as it binds a unique authenticator key pair to a serial number or equivalent unique identifier.
4.5.1 Use Cases
Enterprises actively manage authenticators for various purposes, and these authenticators are essential for securing high-value assets. While employees may select their own authenticators, enterprises may limit authenticators per employee and revoke them upon departure or loss, as they oversee the entire process from purchase to collection. Additionally, enterprises may prioritize manageability and traceability to safeguard resources. Upon a threat incident, forensic investigations may need to trace activities related to a particular authenticator and correlate the authenticator’s usage activity patterns in order to discover anomalies or the source of a threat. Tight management enhances their ability to ensure non-repudiation for transactions. High-risk users may be assigned dedicated authenticators from the enterprise for access to restricted sensitive information or services. These authenticators are assigned specific PINs and are acquired through trusted supply chains.
Certain enterprise deployments require the use of FIDO authenticators with enterprise attestation in order to identify specific device identities (e.g. device serial numbers). Enterprise Attestation validation must also be supported by the organization’s specific Relying Parties. These practices actively address enterprise-specific needs for improved control over device provisioning and lifecycle management.
4.5.2 Process
4.5.2.1 Provisioning
Provisioning for enterprise attestation is modified from the process described in section 4.3 in two ways: the attestation statement includes information unique to the authenticator, and the specific RPs permitted to receive this unique information are permanently “burned” into the authenticator by the authenticator’s manufacturer. The authenticator performs enterprise attestation only for those RPs provisioned to the authenticator. Other RPs may still request any other type of attestation, which excludes the unique identifier.
Authenticators that have the enterprise attestation burned into them must not be sold on the open market and may only be supplied directly from the authenticator’s manufacturer to the RP. An RP wanting enterprise-attestation-enabled authenticators will order them directly from the authenticator’s manufacturer by providing a list of RP IDs (RPIDs). These specific RPIDs are the ones permanently burned/written to the authenticator.
4.5.2.2 User Registration with Enterprise Attestation
During FIDO user registration, as described in section 4.3, the RP may indicate the need for enterprise attestation. This will uniquely associate the user with the specific authenticator by providing proof of the authenticator’s unique identifier. During user registration, the authenticator verifies that the requesting RP (using its RPID) is among those listed in the permanently provisioned list of RPIDs permitted to perform enterprise attestation. If approved, this unique identifier is added to the attestation object and signed by the Attestation Key. The RP should validate the attestation object and, optionally, the certificate chain used to sign the attestation object. The RP can then verify, at user registration time, that the authenticator with this unique identifier was indeed purchased by the enterprise and may include that verification in its records.
The implementation used by an RP to authenticate the uniquely identifying information varies by authenticator. Some authenticators use vendor-facilitated methods, where the enterprise provides a list of RP IDs to the manufacturer and those are imprinted into the authenticators. Another approach relies on an enterprise-managed platform that maintains a policy, such as an enterprise-managed browser. Rather than imprinting the list of allowed RPs into the authenticator, the enterprise-managed platform determines whether the enterprise attestation is provided to the RP based on that policy.
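For illustration, the sketch below shows the browser-side request an RP might make when it requires enterprise attestation; WebAuthn exposes this through the “enterprise” attestation conveyance preference. Whether a uniquely identifying attestation is actually returned depends on the provisioned RP ID list described above. The RP, user, and challenge values are placeholders assumed for this example.

// Minimal sketch: requesting enterprise attestation during registration. The authenticator
// (or an enterprise-managed platform policy) honours this only for RP IDs it has been
// provisioned to trust; other RPs receive a non-identifying attestation or none at all.
async function registerWithEnterpriseAttestation(challenge: Uint8Array, employeeId: Uint8Array) {
  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "corp.example.com", name: "Example Corp" },
      user: { id: employeeId, name: "employee@corp.example.com", displayName: "Example Employee" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],
      attestation: "enterprise",                  // may return a uniquely identifying attestation
    },
  });
}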
5. Privacy Implications and Considerations
While attestation provides a valuable assertion of trust for authenticators, privacy concerns can arise from the information shared during attestation. Some privacy considerations include:
While the attestation properties described in this paper include a broad set of privacy controls, implementers should consider these capabilities against regional and local privacy policies.
Attestation enables sharing information, such as the authenticator’s make and model, firmware version, or manufacturer details, with the RP. Concerns may arise regarding the potential exposure of sensitive authenticator-specific data and the subsequent tracking or profiling of users based on this information. For this very reason, an attestation batch of at least 100,000 devices is recommended, so the pool from which a device could be identified is not small.
Non-enterprise attestation prevents the association of multiple passkeys within an authenticator with different RPs, thus safeguarding user privacy. For example, a person using a single authenticator may create a User Authentication passkey (passkey1) for RP 1 (RP1), then create a new User Authentication passkey (passkey2) for RP 2 (RP2). Even though the person is using the same physical authenticator for both RPs and using attestation, RP1 and RP2 cannot determine, even if they collaborate, that passkey1 and passkey2 are from the same authenticator; therefore, they cannot determine that the transactions are from the same person.
Enterprise attestation adds uniquely identifying information (e.g., a device serial number), allowing an authorized RP to track the use of a specific authenticator across several pre-provisioned RPs within the enterprise. It is expected that users in this environment have an understanding of this property and the value it adds to the enterprise.
6. Adoption and Deployment Considerations
RPs can determine the registration requirements for a FIDO authenticator, as reflected in their preference for attestation conveyance. Some RPs may not require attestations to decide if registration is allowed. Other RPs may have security requirements that require an attestation object in order to make risk decisions. Security requirements may be based on characteristics of the authenticator (e.g., whether it requires a PIN) or could be as specific as the model of authenticator(s) allowed. Finally, in more protected environments, some RPs may require additional enterprise attestations to ensure an authenticator is known, controlled, and trusted by the enterprise.
7. Conclusion
FIDO attestation, a component of the FIDO and WebAuthn standards, validates the authenticity of a user’s authenticator. This process provides a defense against various threats such as supply chain attacks, counterfeit authenticators, and substitution attacks. For RPs requiring higher authentication assurance, attestation is a FIDO-centric mechanism to obtain that assurance. For RPs that need to ensure the authenticity of specific authenticators, attestation provides these RPs assurance that they are dealing with a known and trusted device.
By generating unique key pairs for each RP that a user registers with, FIDO underscores its commitment to user security, eliminating potential cross-service vulnerabilities. The enterprise attestation feature provides organizations with better management of authenticators used by their personnel and is vital to environments that prioritize precise device management.
FIDO attestation brings certain privacy considerations. Disclosing authenticator-specific information, user device fingerprinting, and the potential for user tracking all highlight the importance of a privacy-aware approach. All stakeholders, including RPs, manufacturers, and users, must navigate the path between enhancing security and preserving user privacy.
FIDO attestation is adaptable. RPs have the discretion to request their desired level of attestation, ensuring a tailored approach suitable for both specialized services and large enterprises.
In summary, FIDO attestation augments online authentication. With a focus on public-key cryptography, unique key pairs, and specific attestation processes, its efficacy is maximized through careful deployment, thorough understanding of its capabilities, and a consistent commitment to user privacy.
8. Acknowledgments
The authors acknowledge the following people (in alphabetic order) for their valuable feedback and comments:
FIDO Enterprise Deployment Working Group Members
Dean H. Saxe, Amazon, Co-Chair Enterprise Deployment Working Group
Jerome Becquart, Axiad IDS, Inc.
Johannes Stockmann, Okta Inc.
Tom De Wasch, OneSpan North America Inc.
Tom Sheffield, Target Corporation
John Fontana, Yubico
9. Appendix
9.1 Attestation Object
Appendix Figure 1 – Attestation object: layout illustrating the included authenticator data (containing attested credential data) and the attestation statement.
9.2 Example Attestation Object
attestationObject: {
"fmt": "packed",
"attStmt": {
"alg": -7,
"sig": "3045022100da2710ff0b5f5e5d72cda8c1e650f0b696e304942e55138672aa87a5e370a92d02205fd1a48bbda4757aac21252c7064f21130aba083151ab8ae75a26a356b675495",
"x5c": [
"3082026f30820213a003020102020404ae6da1300c06082a8648ce3d04030205003077310b3009060355040613025553310b3009060355040813024d413110300e06035504071307426564666f726431193017060355040a1310525341205365637572697479204c4c4331133011060355040b130a4f7065726174696f6e733119301706035504031310525341204649444f20434120526f6f743020170d3232303632333034323132315a180f32303532303632323034323132315a30818c310b3009060355040613025553310b3009060355040813024d413110300e06035504071307426564666f726431193017060355040a1310525341205365637572697479204c4c4331223020060355040b131941757468656e74696361746f72204174746573746174696f6e311f301d06035504031316525341204453313030204649444f20426174636820343059301306072a8648ce3d020106082a8648ce3d0301070342000465f2b3189a6dd2f7df9de784c1c8fd00ae804ac8de7bea042d00563dcd5d7a40948ae59d9dcf8722d8b6025ba98fbb80e6698bbe5003e4db4d80c4a50a3348e4a37330713021060b2b0601040182e51c010104041204107e3f3d3035574442bdae139312178b39301f0603551d23041830168014b851a38b84da69c9fd5b467c1f8e374ac0433419300c0603551d130101ff04023000301d0603551d0e041604142806df6c60b1656a78f97a28e168e5ec8d2937b4300c06082a8648ce3d0403020500034800304502210088122ea59cca8480ed57a0a60a2e203302b4d93713f837be7acc3a2c895c6251022010f67d709ea2dc04ca63aec8d341dc9e562909dcea3f2a4abee2bdfd21dd162d"
]
},
"authData": {
"rpIdHash": "f95bc73828ee210f9fd3bbe72d97908013b0a3759e9aea3d0ae318766cd2e1ad",
"flags": {
"userPresent": true,
"reserved1": false,
"userVerified": true,
"backupEligibility": false,
"backupState": false,
"reserved2": false,
"attestedCredentialData": true,
"extensionDataIncluded": false
},
"signCount": 4,
"attestedCredentialData": {
"aaguid": "7e3f3d30-3557-4442-bdae-139312178b39",
"credentialId": "c0a3eb62197b77edd0cd1c73bffeb068dcc2595cfdf2e4dc01478bddc9cefcf52282f95bc73828ee210f9fd3bbe72d97908013b0a3759e9aea3d0ae318766cd2e1ad04000000",
"credentialPublicKey": {
"kty": "EC",
"alg": "ECDSA_w_SHA256",
"crv": "P-256",
"x": "D3Ki/INLfrmlNogo8d1lK7kBT4Fh3wPyVt/kusDAMKY=",
"y": "M11KJSPXRiBn1ZtAo1eynxvaUXqipZJYV0AT0gC2czo="
}
}
}
}
Appendix Figure 2 – Example Attestation object
10. References
[1] FIDO Alliance Metadata Service – https://fidoalliance.org/metadata/
[2] WebAuthn Specification – Attestation Section – https://www.w3.org/TR/webauthn-3/#sctn-attestation
The cover page of the Weekend Review section of The Wall Street Journal, July 20, 2012
On July 9, 2012, not long after The Intention Economy came out, I got word from Gary Rosen of The Wall Street Journal that the paper’s publisher, Robert Thomson, loved the book and wanted “an excerpt/adaptation” from the book for the cover story of the WSJ’s Weekend Review section. The image above is the whole cover of that section, which appeared later that month.
In the article I described a new way to shop:
An “intentcast” goes out to the marketplace, revealing only what’s required to attract offers. No personal information is revealed, except to vendors with whom you already have a trusted relationship.
I also said that this form of shopping—
…can be made possible only by the full empowerment of individuals—that is, by making them both independent of controlling organizations and better able to engage with them. Work toward these goals is going on today, inside a new field called VRM, for vendor relationship management. VRM works on the demand side of the marketplace: for you, the customer, rather than for sellers and third parties on the supply side.
The scenario I described was set ten years out: in 2022, a future now two years in the past. In the meantime, many approaches to intentcasting have come and gone. The ones that have stayed are Craigslist, Facebook Marketplace, Instacart, TaskRabbit, Thumbtack, and a few others. (Thumbtack participated in the early days of ProjectVRM.) We include them in our list of intentcasting services because they model at least some of what we’d like intentcasting to be. What they don’t model is the full empowerment of individuals as independent actors: ones whose intentions can scale across whole markets and many sellers:
Scale gives the customer single ways to deal with many companies. For example, she should be able to change her address or last name with every company she deals with in one move—or to send an intention-to-buy “intentcast” to a whole market.
Should we call the sum of it “i-commerce”? Just a thought.
Back to the Wall Street Journal article. It is clear to me now that The Customer as a God would have been a much better title for my book than The Intention Economy, which needs explaining and sounds too much like The Attention Economy, which was the title of the book that came out ten years earlier. (I’ve met people who have read that one and thought it was mine—or worse, called my book “The Attention Economy” and sent readers to the wrong one.)
Of course, calling customers gods is hyperbole: exaggeration for effect. VRM has always been about customers coming to companies as equals. The “revolution in personal empowerment” in the subhead of “The Customer as a God” is about equality, not supremacy. For more on that, see the eleven posts before this one that mention the R-button:
That symbol (or pair of symbols) is about two parties who attract each other (like two magnets) and engage as equals. It’s a symbol that only makes full sense in open markets where free customers prove more valuable than captive ones. Not markets where customers are mere “targets” to “acquire,” “capture,” “manage,” “control” or “lock in” as if they were slaves or cattle.
The stage of Internet growth called Web 2.0 was all about those forms of capture, control, and coerced dependency. We’re still in it. (What’s being called Web3, while “decentralized” (note: not distributed), is also based on tokens and blockchain.) Investment in customer independence rounds to nil.
And that’s probably the biggest reason intentcasting as we imagined it in the first place has not taken off. It is very hard, inside industrial-age business norms (which we still have), to see customers as equals, or as human beings who should be equipped to lead in the dance between buyers and sellers, or demand and supply, in truly open marketplaces. It’s still easier to see us as mere consumers (which Jerry Michalski calls “gullets with wallets and eyeballs”).
So, where is there hope?
How about AI? It’s at the late end of its craze stage, but still here to stay, and hot as ever:
Can AI provide the “revolution in personal empowerment” we’ve been looking for here since 2006? Can it prove our thesis—that free customers are more valuable than captive ones—to themselves and to the marketplace?
If it can, then the market is a greenfield.
Some of us here are working at putting AI on both sides of intentcasting ceremonies. If you have, or know about, one or more of those approaches (or any intentcasting approaches), please share what you know, or what you’ve got, in the comments below. And come to VRM Day on October 28. I’ll be putting up the invite for that shortly.
In the years since passkeys were first announced, a lot has changed in their availability to consumers, nomenclature across platforms, and even implementation requirements. However, one thing that has yet to change is the need for more awareness of what passkeys are, how they work, and their benefits.
In this webinar we debunk common misconceptions associated with passkeys, which we’ve heard from customers, FIDO members and industry participants, and see pop up across social networks. By doing so, we’re confident we can help drive our industry towards a passwordless world.
Authentication is a complicated problem with ever-creeping scope. Passkeys provide phishing-resistance at the point of authentication, but you need protection at enrollment and during the authenticated session thereafter, too, to truly fortify the authentication process against evolving threats.
In this discussion, authentication experts walk through all of the components of a user authentication workflow, highlighting areas of innovation and future steps for securing enrollment, authentication, and sessions.
We have a bonus Sponsor Spotlight episode of the Identity at the Center podcast for you this week sponsored by Semperis.
Jim McDonald hosts Eric Woodruff, Senior Security Researcher at Semperis, to discuss the company's approach to identity security. They delve into Semperis' tools like Purple Knight and Forest Druid, focusing on their capabilities in detecting and mitigating Active Directory and Entra ID vulnerabilities. The conversation covers the critical role of prevention and response in ITDR, the impact of ransomware on Enterprise ID infrastructures, and the importance of ensuring a trusted state in Active Directory.
You can watch it on YouTube at https://youtu.be/UwIP0hQmv00?si=BBYvcVbO9cZqET-Q
More at idacpodcast.com
Today marks 300 episodes of the Identity at the Center podcast. We celebrated by doing what we do best - talking about IAM! We took the opportunity to answer a couple of listener questions including “what is identity at the center” and whether using SSN to validate caller identities is a good idea (it’s not).
You can watch it here: https://www.youtube.com/watch?v=VXxBIG2UI8s
The website: idacpodcast.com
We’re thrilled to announce that Energy Web’s AutoGreenCharge mobile app has completed development and will soon be released on the Apple App Store and Google Play Store, enabling electric vehicle owners to charge with 100% renewable electricity.
AutoGreenCharge is a mobile app that provides unprecedented transparency and traceability for EV charging. It works by:
Connecting your EV: Our partnership with Smartcar makes it easy for data to be shared with the AutoGreenCharge app from a wide range of EV models.
Tracking Your Charging Sessions: Every time you charge your car, AutoGreenCharge collects data about your charging session.
Matching Renewable Energy: The app matches your charging session with renewable energy certificates from markets around the world and creates a publicly verifiable proof that your electricity comes from clean sources.
Tracking Your Green Proofs: You can easily view and verify your green proofs for each charging session within the app.
In addition to the mobile app, AutoGreenCharge is also available for enterprise customers. The solution can serve as a powerful tool for EV fleets, charge point operators, and automakers looking to decarbonize charging and offer new sustainability solutions to their customers.
We’re excited to bring AutoGreenCharge to EV owners everywhere. Stay tuned for the official launch and get ready to experience the future of electric vehicle charging.
There’s still time to sign up for the beta launch on https://www.autogreencharge.com/.
About Energy Web
Energy Web is a global non-profit organization accelerating the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to enable new market mechanisms and decentralized applications that empower energy companies, grid operators, and customers to take control of their energy futures.
Clean EV Charging at Your Fingertips was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.
Imagine a world where ensuring patient safety and improving healthcare outcomes begins with something as simple as smart inventory management.
In this episode, hosts Reid Jackson and Liz Sertl are joined by Chris Anderson, Director of Technical Program Management at VUEMED. Chris, with nearly a decade of experience in data management and analytics, takes listeners inside the intricate world of inventory management solutions for hospitals, focusing on implantable medical devices.
Chris also discusses how a unified system not only enhances the tracking of medical devices but also bolsters patient safety through more effective recall management and improved patient outcomes.
In this episode, you’ll learn:
How unique device identification (UDI) standardization is transforming hospital inventory management, enabling more precise tracking and significantly improving patient safety outcomes.
Insights into the seamless integration of GS1 standards within healthcare supply chains and learn practical approaches to overcoming compliance pitfalls and maximizing data utility.
The emerging trends and legislative updates that are set to impact future supply chain regulations in healthcare, providing a strategic edge to stay ahead in a rapidly evolving landscape.
Connect with GS1 US:
Our website - www.gs1us.org
Connect with the guests:
Chris Anderson on LinkedIn
Kia ora,
If Tina had been in the Oceania Room at Te Papa Tongarewa last Tuesday, and for the two Digital Trust Hui Taumata that preceded it, like some of you were, I think this would have been her observation. It certainly was for another singer. The buzz in the room was palpable from the very start. Compared to the previous two years, this event saw increased attendance (214 registered, 202 in the room), more international and local speakers, and an increase in panels and roundtable discussions. It’s tangible evidence that Digital Trust is entering the consciousness of more people, accompanied by a desire to ‘get stuff done’.

To that end – amongst several other great presentations, panels and exhibits of things currently in progress for those who wish to opt in – NZTA showcased its vehicle status app, DINZ member Worldline walked us through its Digital Identity Acceptance Network at POS terminals, and DINZ member Xebo demonstrated the application of its information assurance platform for a document (quote) using verifiable credentials in the exhibition area. We are hugely grateful to the Hui’s partners, speakers and panellists, without whom this fantastic event would not have been possible.
Awareness and education play the most critical role in people’s adoption of these new services. In their absence, misinformation and disinformation fill the vacuum. A case in point is the jaw-dropping statistic from DINZ’s research published last year regarding the extent to which organisations are trusted to protect identity and use personal data responsibly: government agencies scored only 51%. And yet OPC’s Approved Information Sharing list is much more limited and evidently contradicts the surveillance conspiracies that swirl around the internet.
It is similar with the opt-in Digital Identity Services Trust Framework. It is not widely known that the framework is primarily targeted at service providers, helping them keep their clients’ (your and my) information private and secure from fraudsters by introducing best practices, such as adopting protected reusable verifiable credentials that you decide who gets to see. That way you are not forced to hand over documents to all and sundry for copying, which carries the risk of targeted data theft.
In my closing remarks at the Hui, I asked each attendee to consider what single action they can undertake right now to improve Digital Trust in Aotearoa. And I’m asking again here. There is a private-sector-initiated awareness and education module all set to go. Will your organisation step up to its corporate social responsibility in this domain and help sponsor it? Reach out to me if you’re interested to know more.
Lastly, please take some time to listen to the first of DINZ’s new podcast series, Digital Identity in Focus here. And if you’re interested in collaborating on a brief submission on the CPD Bill, please contact me here.
Ngā mihi Colin Wallis
Executive Director, Digital Identity NZ
Read full news here: ‘Simply the Best’ | August Newsletter
The post ‘Simply the Best’ | August Newsletter appeared first on Digital Identity New Zealand.
The Human Colossus Foundation is excited to announce the publication of a groundbreaking new paper titled "Decentralised Semantics: A Semantic Engine User Perspective," authored by Carly M. Huitema, Paul Knowles, Philippe Page, and A. Michelle Edwards. This paper marks a significant advancement in how researchers search for and manage information through advanced semantic data management. The Semantic Engine, developed in the agri-food sector, leverages Overlays Capture Architecture (OCA) as a basis for semantic harmonisation and information discovery.
Citation
Huitema, C.M., Knowles, P., Page, P. and Edwards, A.M. (2024)
Decentralised Semantics: A Semantic Engine User Perspective.
Data Science Journal, 23: 42, pp. 1–5.
DOI: https://doi.org/10.5334/dsj-2024-042
Addressing the Challenges of FAIR Data Implementation
The paper addresses a critical issue in implementing the Findable, Accessible, Interoperable, and Reusable (FAIR) data principles. While many research groups strive to make their data FAIR, they often encounter challenges documenting the context in which data was collected, processed, and analysed. This lack of machine-actionable, contextual metadata frequently renders data less reusable and visible outside the immediate research team.
To overcome these challenges, the authors present the first version of the Semantic Engine, a tool designed to facilitate the creation of decentralised, machine-actionable metadata schemas. The tool is particularly useful when data is collected across multiple projects and institutions, as is the case with Agri-Food Data Canada.
Leveraging Overlays Capture Architecture (OCA)
The Semantic Engine is built upon the Overlays Capture Architecture (OCA), a flexible and extensible standard hosted by the Human Colossus Foundation. OCA supports decentralised collaboration and reproducibility by allowing multiple contributors to work on different aspects of a data schema without compromising the integrity of the core data structure. This approach is particularly beneficial in the agri-food sector, where data heterogeneity and decentralised research efforts are expected.
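To make “machine-actionable metadata” more concrete, here is a heavily simplified, hypothetical sketch of the OCA layering idea: a capture base that defines a dataset’s attributes, plus a label overlay that adds human-readable context without touching the core structure. The field names and shapes are illustrative only and are not the normative OCA serialization, which also includes self-addressing digests and many further overlay types.

```typescript
// Simplified, hypothetical illustration of the OCA layering idea.
// The real OCA serialization also includes self-addressing identifiers (digests),
// versioned type strings, and additional overlay types.

interface CaptureBase {
  attributes: Record<string, string>;   // attribute name -> data type
  flaggedAttributes: string[];          // attributes marked as sensitive
}

interface LabelOverlay {
  language: string;
  attributeLabels: Record<string, string>;
}

// Core schema: what was measured, independent of who labels or displays it.
const captureBase: CaptureBase = {
  attributes: {
    sample_id: "Text",
    soil_ph: "Numeric",
    collected_on: "DateTime",
  },
  flaggedAttributes: ["sample_id"],
};

// A contributor can add labels in their own language without changing the capture base.
const englishLabels: LabelOverlay = {
  language: "en",
  attributeLabels: {
    sample_id: "Sample identifier",
    soil_ph: "Soil pH",
    collected_on: "Collection date",
  },
};

// Different teams can maintain different overlays against the same capture base,
// which is what makes the schema decentralised yet harmonisable.
console.log({ captureBase, overlays: [englishLabels] });
```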
Applications and Future Implications
The Semantic Engine, freely accessible at semanticengine.org, allows researchers to create, edit, and manage OCA-based schemas. It has been thoroughly tested by researchers at the University of Guelph and is designed to be user-friendly for the broader research community.
The potential applications of OCA and the Semantic Engine extend beyond the agri-food sector. The paper highlights ongoing projects in Canada and Switzerland and the EU Horizon project 'NextGen,' which uses OCA to harmonise semantic data in cardiovascular personalized medicine.
The release of the "Decentralised Semantics: A Semantic Engine User Perspective" paper represents a significant step forward in making research data more FAIR and usable. By leveraging the Semantic Engine and OCA, researchers can ensure that their data is well-documented, reproducible, and accessible to a broader audience. The Human Colossus Foundation is proud to support this critical work and looks forward to its continued impact on the research community.
You can access the full paper here and explore the Semantic Engine at semanticengine.org for more information.
GLEIF is pleased to have broadened its engagement and participation in Trust Over IP Foundation (ToIP) by becoming a member of the ToIP Steering Committee in March 2024, recognizing the importance of well-functioning governance to the ongoing success of the foundation. GLEIF has been a member of ToIP, as a Founding Contributor member, since May 2020.
With the verifiable Legal Entity Identifier (vLEI), GLEIF has pioneered a new form of digitized organizational identity to meet the global need for automated identification, authentication and verification of legal entities across a range of industries. By creating the vLEI, GLEIF is answering this urgent and unmet need, pioneering a multi-stakeholder effort to create a new global ecosystem for organizational digital identity.
The verifiable Legal Entity Identifier (vLEI) concept is simple: it is the secure digital counterpart of a conventional Legal Entity Identifier (LEI). In other words, it is a digitally trustworthy version of the 20-digit LEI code which can be verified automatically, without the need for human intervention. The vLEI concept is also very much in line with the ToIP Technical and Governance Frameworks, as detailed below.
The vLEI Trust Chain demonstrates the ability to chain the issuance of vLEI credentials and provides the foundation for automated verification of vLEIs back to GLEIF, enabling cryptographic verification of an organization’s identity back to its validated LEI.
vLEIs go further, though, in being able to cryptographically tie persons to organizations in the roles in which those persons represent or engage with these organizations. vLEI Role Credentials combine three concepts: (1) the organization’s identity, represented by the LEI, (2) a person’s identity, and (3) the role that the person plays for the organization.
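As a rough illustration of that combination, the sketch below shows the kind of attribute set a vLEI Role Credential ties together. The field names are simplified for readability and are not GLEIF’s normative credential schema; in production these claims are carried in an ACDC credential whose issuance chains cryptographically back to GLEIF.

```typescript
// Simplified, non-normative sketch of what a vLEI Role Credential combines.
// Field names are illustrative; the production credential is an ACDC whose
// issuance chain enables automated verification back to GLEIF.

interface VleiRoleCredentialAttributes {
  lei: string;              // (1) the organization's identity: its 20-character LEI
  personLegalName: string;  // (2) the person's identity
  officialRole: string;     // (3) the role the person plays for the organization
}

const exampleCredential: VleiRoleCredentialAttributes = {
  lei: "5493001KJTIIGC8Y1R12",        // example-format LEI, used here only as a placeholder
  personLegalName: "Jane Example",
  officialRole: "Chief Financial Officer",
};

// A verifier would check these claims against the credential chain back to GLEIF,
// not simply read them as plain data.
console.log(exampleCredential);
```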
GLEIF works to advance digital trust standards in the neutral ToIP forum through participation in the Ecosystem Foundry Working Group, the Issuer Requirements Task Force of the Governance Stack Working Group and as a co-chair of both the ACDC/KERI Task Force and the Technical Stack Working Group. It is here that the technical specifications of the KERI Suite have been drafted and have begun the process of approval to become published ToIP standards. The KERI Suite of specifications is made up of three documents: the Key Event Receipt Infrastructure (KERI) specification, the Authentic Chained Data Containers (ACDC) specification and the Composable Event Streaming Representation (CESR) specification.
GLEIF also contributed to the development of the ToIP Ecosystem Governance Metamodel and companion guide. The verifiable LEI (vLEI) Ecosystem Governance Framework is based on the ToIP Governance Metamodel.
The post ToIP Welcomes GLEIF to our Steering Committee appeared first on Trust Over IP.
It’s time for another new episode of The Identity at the Center Podcast! We talked with Microsoft Product Manager Merill Fernando about the current state and future plans for Entra ID and the importance of DevOps and governance in identity management.
Watch it here: https://www.youtube.com/watch?v=szPgsyQUpQU
More info: idacpodcast.com
Carlsbad, Calif., August 14, 2024 – The FIDO Alliance today announced the agenda for Authenticate 2024, to be held October 14-16, 2024, at the Omni La Costa Resort and Spa in Carlsbad, California.
Now in its fifth year, Authenticate is the only industry conference dedicated to all aspects of user authentication, and has become a ‘must attend’ cybersecurity event. This year’s event includes over 100 sessions and 125 speakers from across the globe, offering the latest innovations, expertise, and essential discussions for the digital identity industry, with an emphasis on passwordless authentication using passkeys.
Check out the Authenticate 2024 Agenda and register at https://authenticatecon.com/event/authenticate-2024-conference/.

Authenticate is perfect for CISOs, security strategists, enterprise architects, UX leaders, and product and business executives at any stage of their passwordless journey. Attendees will dive into practical content on authentication and identity security. The topics explored include FIDO technology basics, achieving business results, best practices for implementation in various use cases, UX factors, and real-world case studies, all hosted in a resort environment that fosters collaboration, networking, and community building.
The 2024 keynote speakers have extensive experience implementing passwordless solutions for workforces and consumers and represent renowned organizations such as Amazon, FIDO Alliance, Google, Microsoft, Sony, Visa, and Yubico. The conference offers four stages with dedicated content tracks tailored to match attendees’ levels of expertise, interests, and implementation stages. Additionally, attendees will be able to get to know FIDO solution providers and join networking events to connect with peers and industry experts.
The Authenticate 2024 agenda features the following 11 content-rich tracks:
Business Case and ROI for Passkeys
Technical Fundamentals and Features of Passkeys
IAM Fundamentals
UX Fundamentals of Passkeys
Identity Verification Fundamentals
Passkeys for Consumers
Passkeys in the Enterprise
Passkeys for Government Use Cases and Policy Making
Passkeys for Payments
The Passwordless Vision and the Future of Passkeys
Complementary Technologies and Standards

Sponsoring Authenticate 2024
Authenticate 2024 is accepting sponsorship applications for companies to showcase their solutions to key decision-makers and connect with potential customers. To learn more about the available on-site and virtual sponsorship options for the 2024 event, visit the Authenticate Sponsors page here. Due to the limited opportunities remaining, interested parties are encouraged to reach out to the Authenticate team soon at authenticate@fidoalliance.org.
About Authenticate
Authenticate 2024 is the leading conference dedicated to all aspects of user authentication, with a focus on FIDO standards. Celebrating its 5th year, the event will take place October 14-16, 2024 at the Omni La Costa Resort and Spa, offering both in-person and virtual attendance options. The conference gathers global leaders working to advance stronger, phishing-resistant authentication, and provides the latest educational content, technical insights, tools, and deployment best practices.
Authenticate 2024 is hosted by the FIDO Alliance, the cross-industry consortium that provides standards, certifications, and market adoption programs to accelerate the utilization of simpler, stronger authentication innovations like passkeys. The signature sponsors for the 2024 Authenticate conference include industry leaders Cisco, Google, Microsoft, and Yubico.
Visit the Authenticate 2024 website to register now and use the early bird discount (through September 9, 2024). Follow @AuthenticateCon on X for the latest updates.
Authenticate Contact
PR Contact
We are pleased to announce that Liad Wagman has joined Internet Safety Labs as our newest Advisor. Liad is currently serving as the Dean and Professor of Economics at the Rensselaer Polytechnic Institute’s Lally School of Management.
Liad brings a wealth of experience from his previous tenure at the Illinois Institute of Technology, where he served as Dean and Professor of Economics and was a key figure in spearheading innovative STEM and lifelong learning programs at the Stuart School of Business.
Now at Rensselaer, Liad continues to influence the academic and business landscapes, emphasizing the importance of ethical practices within economics and technology. His decision to join ISL as an Advisor is driven by a shared commitment to enhancing product safety within the tech industry.
“Solutions that align incentives for ethical behavior can benefit society by enabling stakeholders to make more informed decisions, reducing uncertainty, and fostering trust,” Wagman commented, highlighting his vision for his role at ISL.
At ISL, we are excited about the perspectives and insights Liad will bring to our mission. His extensive background and forward-thinking approach will be invaluable as we continue our work to make the internet a safer and more transparent space. We extend our deepest gratitude to Liad and all our advisors, whose expertise helps propel our mission forward.
Please join us in warmly welcoming Liad Wagman to Internet Safety Labs!
The post Introducing Our Newest ISL Advisor: Dr. Liad Wagman appeared first on Internet Safety Labs.
Disentis, July 28 to August 9
Nestled amidst the majestic Swiss Alps and the picturesque Disentis Monastery, the 2024 Summer Academy of the German Studienstiftung des deutschen Volkes and Max Weber Programm provided an inspiring setting for over 70 passionate students. Among them, six vibrant working groups explored pressing societal matters, ranging from sustainability to digital health. One such group took on the challenge of envisioning a secure and inclusive European Health Data Space (EHDS).
A vision for the future of health data.
With the recent legislative work of the European Parliament and Council, the European Union is taking a visionary step towards establishing the EHDS. Beyond its benefits for patients and research, the EHDS has the potential to leverage digital technologies to significantly enhance the resilience and long-term sustainability of Europe's universal healthcare systems, provide a unique economic advantage, and set global standards in privacy, individual protection, and data governance.
For a European health data space that can be adopted by everyone (including patients!), individuals must rely on a system that embeds information security by design. A working group approached the question by leveraging the participants' diverse perspectives as stakeholders in a European Health Data Space.
Under the leadership of Philippe Page from the Human Colossus Foundation's Research Council, an international team of eleven students from diverse disciplines embarked on the mission to address key aspects of the EHDS. Their goal was to create a safe, accessible, and economically viable solution that would benefit patients and researchers alike.
Three guiding questions anchored their deliberations:
How might EHDS revolutionize medical research pathways via expanded data accessibility?
What measures could ensure the EHDS contributes to the EU economy without compromising health data privacy from commercial exploitation?
Which components constitute a secure, scalable infrastructure that meets the EHDS expectations and security demands? Would such an implementation prove sustainable?
The working group organized itself into three distinct focus groups, each addressing specific themes related to the broader topic. These subgroups operated autonomously throughout the sessions, diving into their subjects. Daily, the working group would collaborate in a collective session to share insights on primary/secondary data usage, discuss findings, and harmonize perspectives to manage risks in commercial usage. Through these collaborative efforts, the participants crafted a first draft of a position paper encompassing essential questions about the future development of the EHDS.
In conclusion, the Human Colossus Foundation thanks the organisers for creating space to bring new ideas forward in a manner that respects everyone’s perspective. The vision initiated during the retreat in Disentis Monastery is just the beginning. With plans to reconvene in 2025, the group aims to build upon its foundational ideas and create a safer, more inclusive European Health Data Space that sets global benchmarks in privacy, individual protection, data use in research, and data governance.
OrbisDB has emerged as a premier database solution for the Ceramic Network. Building on the foundation laid by ComposeDB, OrbisDB brings significant advancements in functionality, performance, and user experience. This blog post will elaborate on the connection between Ceramic and OrbisDB, highlight OrbisDB's new features, and showcase its value to developers.
ComposeDB: The Original Building Block
ComposeDB was the first database on Ceramic and has become an integral technology for many decentralized applications built on the network, such as Passport.xyz, Zuzalu City, CharmVerse, and Lateral DeSci.
ComposeDB has been instrumental in dapp development on Ceramic because it introduces a robust, scalable, and user-friendly approach to data management. It supports structured data models, advanced queries, and the integration of decentralized identities, all while leveraging Ceramic's fast performance and high transaction capacity.
OrbisDB: A Practical Evolution
3Box Labs designed Ceramic as an open network upon which an ecosystem of data-handling solutions could emerge. We launched ComposeDB in 2023 as the first database service on the network.
While ComposeDB represented the first database service offered on Ceramic and introduced many advancements for interacting with Ceramic, the need for simple onboarding, hosted nodes, SQL, and easy integrations with other services led the Orbis team to create OrbisDB.
Built initially as the Ceramic-based infrastructure for Orbis Social, OrbisDB evolved from a template implementation used by leading crypto projects such as Iggy Social, CoinEasy, Autonolas, and Gitcoin Schelling Point into a slick set of interface services for data on Ceramic, including a UI for no-code deployment, integrated hosting, support for additional languages, and a blue sea of possibilities unlocked by plugins.
Key Upgrades with OrbisDB

Simplified Ceramic Developer Experience:
Rapid Ceramic onboarding: OrbisDB offers a web app and SDK for storing and managing datasets on Ceramic, no-code or via CLI.
Hosted nodes: OrbisDB makes Ceramic DevOps easy with an in-built hosted node service.
Accelerated customization: extend the functionality of your database with plugins, and build plugins for other developers.

Database Language Choice:
SQL queries: using PostgreSQL as its indexing database, OrbisDB offers scalable performance and the benefits of traditional scaling methods (a hypothetical query sketch appears at the end of this post).
GraphQL (already available on ComposeDB) and vector embeddings are both in development.

Plugin Ecosystem:
Optional and versatile: developers can easily add plugins to OrbisDB. These plugins are optional and designed to perform operations beyond the core's scope, providing additional functionality and connections to other blockchain services. Recently released plugins for Dune (link) and Base (link) make data visualization and importing on-chain data from any Base smart contract code-free and straightforward.
Open source: plugins are open source. Users can build and share plugins with other developers in the ecosystem.
Do anything with plugins: combine on-chain transactions from Base or other EVMs with verifiable data on Ceramic (e.g., enable mutable and verifiable metadata); provide sybil resistance and instant reputation scores for all user-generated data using Passport.xyz or Verax attestations; easily token-gate your applications via pre-defined indexing logic; resolve ENS domain names directly from any dataset in one click; enable a single query from multiple data sources (API, on-chain, Ceramic data, etc.).

Get Early Access to OrbisDB Studio
OrbisDB represents a practical evolution of databases on Ceramic, building upon the foundations laid by ComposeDB and significantly improving experience, languages, and interoperability.
Projects have already started building on OrbisDB in beta, including Index Network, Plurality, and Flashcards, for use cases ranging from a blockchain event listener to storing encrypted user data and educational content.
We're excited to work with Orbis to support the future of decentralized data management. OrbisDB Studio, accessible later this year, will offer the developer experience improvements discussed above. Sign up here to get on the waitlist for early access.
Learn more about OrbisDB at useorbis.com
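To make the SQL point noted in the upgrade list above more concrete, here is a hypothetical sketch of querying the PostgreSQL database that an OrbisDB node uses for indexing, using the standard pg client for Node.js. The connection settings, table name, and column names are assumptions made for illustration and are not taken from OrbisDB’s documentation; a real deployment exposes indexed models under its own naming scheme.

```typescript
// Hypothetical sketch: querying the PostgreSQL indexing database behind an
// OrbisDB node with the standard "pg" client. The connection details, table,
// and columns below are invented for illustration and will differ in practice.
import { Client } from "pg";

async function latestPosts(): Promise<void> {
  const client = new Client({
    host: "localhost",          // assumption: local OrbisDB indexing database
    port: 5432,
    database: "orbisdb_index",  // hypothetical database name
    user: "readonly",
    password: "example",
  });

  await client.connect();
  try {
    // Ordinary SQL over indexed Ceramic documents: no custom query language needed.
    const result = await client.query(
      `SELECT stream_id, controller, content
         FROM posts_model            -- hypothetical table for an indexed data model
        ORDER BY indexed_at DESC
        LIMIT 10`
    );
    for (const row of result.rows) {
      console.log(row.stream_id, row.controller);
    }
  } finally {
    await client.end();
  }
}

latestPosts().catch(console.error);
```

The point is simply that once Ceramic documents are indexed into PostgreSQL, familiar SQL tooling and traditional scaling methods apply.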
It’s cybersecurity season in Las Vegas and I’m inspired to write an overdue post on why I hate the phrase “cyber civil defense”. Actually, I don’t hate the phrase, I disagree with its usage. Being the literal sort I am, I have to of course start with a long look at what the three words seem to mean.
Cyber: aka technology; though perhaps a more detailed definition would include “software driven” and “internet connected” as necessary attributes.
Civil: in this context, I think it means “citizens” or just “people”.
Defense: The catch with this term is that it’s unclear what or who is being defended and by what or whom. For instance, this innocuous three-word phrase could mean any number of things:
Civilians defending “cyber” [tech] from other civilians.
Civilians defending other people from civilians.
Tech defending civilians from other civilians.
Tech defending civilians from tech.
Tech defending tech from civilians. (ew.)
I could go on but my head hurts.
Civilian defense seems to imply a kind of volunteer force to defend people from cyber threats (what kinds of threats?).
Here’s where the wheels fall off this phrase: what about when the call’s coming from inside the house? Meaning, what about when the technology—as designed and with perfect integrity—is itself harmful to people? I’m not fine with using the phrase in the context or implication of defending people from risks from commercial technology itself because doing so:
– Reinforces that it’s acceptable for commercial technology (i.e. commercial products) to be a thing that civilians need to protect themselves from,
– Gaslights people into thinking that it’s somehow their responsibility to protect themselves against commercial technology that is evolving faster than the governance around it, bolstered by staggering amounts of financial resources, and whose risks are admittedly poorly understood by the makers themselves, but with just a little more elbow grease, you, dear user, can maybe be marginally less at risk.
– Smacks a bit of a military operation. I don’t want to join an army, I just want to have reasonably safe technology products.
– Also it’s a smidge paternalistic. (I can almost hear the “little lady” in there…)
The good news is we already have a phrase to describe risks of commercial products on humans. It’s called Product Safety.
But product safety is an abject failure when it comes to commercial software and software-driven technology. In the US we have a dedicated product safety commission, but its scope hasn’t been updated since 2008, and it was hamstrung by budgetary contractions in the Consolidated Appropriations Act of 2019. Other agencies pick up pieces of product safety in the style of the blind men and the elephant, using their granted powers to maximum effect. The failure, however, is with the lawmakers. We have not updated ideas of “products” and “product safety” to keep pace with the internet age, and citizens pay the price every day.
Sadly, from my research, it usually does take around 50 years after the launch of a new commercial product for US product safety laws to emerge, so we’re depressingly on time. For example, seatbelts became mandatory on January 1, 1968, sixty years after the commercial launch of the Ford Model T.
The EU recognized this gap in 2023 with their updated product safety law. As we in the US still wait for a federal privacy law, perhaps we can leapfrog ahead to a reimagined federal product safety law. Good news: we at ISL have tons of data, know-how, and tools to support this; it doesn’t have to start from scratch. But it would take extraordinary intestinal fortitude on behalf of the lawmakers to create something that meaningfully throttles the myriad risks technology foists upon us today. It would take precise regulation and a financially backed commitment to enforcement.
I won’t be holding my breath, but we are absolutely here for that moment if and when it comes. Meanwhile, in the likely event the US government continues to ignore product safety for technology, ISL will continue to champion the safety of all tech users through our maturing safety labels and research.
The post In Defense of Cyber Product Safety for Civilians (or Something) appeared first on Internet Safety Labs.