Last Update 6:34 PM March 29, 2025 (UTC)

Organizations | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!

Friday, 28. March 2025

Oasis Open

Invitation to comment on TOSCA Version 2.0 before call for consent as OASIS Standard

60 Day Public Review - ends May 27th

OASIS and the Topology and Orchestration Specification for Cloud Applications (TOSCA) TC [1] are pleased to announce that TOSCA Version 2.0 CS01 is now available for public review and comment.

TOSCA provides a language for describing application components and their relationships by means of a service topology, and for specifying the lifecycle management procedures for creation or modification of services using orchestration processes. The combination of topology and orchestration enables not only the automation of deployment but also the automation of the complete service lifecycle management. The TOSCA specification promotes a model-driven approach, whereby information embedded in the model structure (the dependencies, connections, compositions) drives the automated processes.
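
To make the model-driven idea concrete, here is a minimal sketch, written in Python purely for illustration, of a topology with two node templates and a hosting relationship. The node type names ("Compute", "Database") and key layout are assumptions chosen for readability, not text taken from the TOSCA Version 2.0 specification; a real template would be authored in TOSCA YAML.

import json

# Illustrative only: a TOSCA-style service topology expressed as a Python
# dictionary. An orchestrator walks this structure (nodes plus relationships
# such as the "host" requirement) to automate deployment and drive later
# lifecycle operations against the same model.
service_template = {
    "tosca_definitions_version": "tosca_2_0",
    "service_template": {
        "node_templates": {
            "db_host": {
                "type": "Compute",  # assumed node type name
            },
            "app_database": {
                "type": "Database",  # assumed node type name
                "requirements": [
                    {"host": "db_host"},  # the database must be hosted on db_host
                ],
            },
        },
    },
}

print(json.dumps(service_template, indent=2))

Deployment order falls out of the relationships in the model: db_host must exist before app_database can be created on it.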

The TC received three Statements of Use from Ericsson, Tal Liron, and Chris Lauwers [3].

The candidate specification and related files are available here:

TOSCA Version 2.0 

Committee Specification 01 

5 December 2024 

https://docs.oasis-open.org/tosca/TOSCA/v2.0/cs01/TOSCA-v2.0-cs01.md (Authoritative)

https://docs.oasis-open.org/tosca/TOSCA/v2.0/cs01/TOSCA-v2.0-cs01.html

https://docs.oasis-open.org/tosca/TOSCA/v2.0/cs01/TOSCA-v2.0-cs01.pdf

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at:

https://docs.oasis-open.org/tosca/TOSCA/v2.0/cs01/TOSCA-v2.0-cs01.zip

Members of the TOSCA TC approved this specification by Special Majority Vote [2]. The specification had been released for public review as required by the TC Process.

Public Review Period

The 60-day public review is now open and ends 27 May 2025 at 23:59 UTC.

This is an open invitation to comment. OASIS solicits feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility. Members of the TC should submit feedback directly to the TC’s members-only mailing list. All others should follow the instructions listed here.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [4], especially as it applies [5] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark, and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional references:

[1] OASIS TOSCA TC

[2] Approval ballot

[3] Links to Statements of Use

Ericsson: https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=72612&wg_id=f9412cf3-297d-4642-8598-018dc7d3f409

Tal Liron: https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=72634&wg_id=f9412cf3-297d-4642-8598-018dc7d3f409

Chris Lauwers: https://groups.oasis-open.org/higherlogic/ws/public/document?document_id=72635&wg_id=f9412cf3-297d-4642-8598-018dc7d3f409

[4] https://www.oasis-open.org/policies-guidelines/ipr/

[5] https://www.oasis-open.org/committees/tosca/ipr.php

Intellectual Property Rights (IPR) Policy

The post Invitation to comment on TOSCA Version 2.0 before call for consent as OASIS Standard appeared first on OASIS Open.


FIDO Alliance

Passkeys: The Journey to Prevent Phishing Attacks

This white paper is part of a three-part series on preventing phishing attacks through passkey deployment:

Part 1: Overview – Introduces the concepts of a passkey journey toward phishing prevention.

Part 2: Partial prevention – Details strategies for enforcing passkeys in specific scenarios.

Part 3: Full prevention – Explains how to achieve comprehensive phishing resistance.

Making your services phishing-resistant takes more than one day because you are not just adopting a new phishing-resistant authentication method. It is a journey with multiple stages where you improve security by strengthening account login and recovery processes. This paper outlines the passkey journey and defines the authentication and recovery requirements for each stage.

Audience

Relying parties and developers who want to protect their applications from phishing attacks by adopting passkeys.

You can read the white papers on Passkey Central or use the following buttons to download PDF versions.

Download Part 1: Overview – Introduces the concepts of a passkey journey toward phishing prevention.

Download Part 2: Partial Prevention – Details strategies for enforcing passkeys in specific scenarios.

Download Part 3: Full Prevention – Explains how to achieve comprehensive phishing resistance.

ResofWorld

U.S. tariffs threaten Mexico’s booming EV sector

Despite the government's efforts to protect local businesses, geopolitical tensions could derail Mexico's ambitions to become a major EV manufacturing hub.
As the presidents of Mexico and the U.S. engaged in a diplomatic spat over tariffs earlier this month, one Mexican politician sprang into action. Samuel García, the governor of Nuevo...

Thursday, 27. March 2025

Velocity Network

Digital Credentials: Enhancing Trust, Expanding Opportunity, Enabling Mobility

The post Digital Credentials: Enhancing Trust, Expanding Opportunity, Enabling Mobility appeared first on Velocity.

Oasis Open

OpenC2 JSON Abstract Data Notation (JADN) Version 2.0 CSD01 – Available for comment

Public review ends April 28th

OASIS and the OpenC2 TC are pleased to announce that OpenC2 JSON Abstract Data Notation (JADN) Version 2.0 is now available for public review and comment.

JSON Abstract Data Notation (JADN) is an information modeling language based on Unified Modeling Language (UML) logical data types, used to both express the meaning of data items at a conceptual level and formally define and validate instances of those types. JADN uses information theory to define logical equivalence, which enables representation of essential content in a wide range of formats and ensures translation among representations without loss.
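
As a rough sketch of what that looks like in practice, the snippet below (Python, for illustration only) holds a simplified JADN-style type definition as plain data and checks one instance against it. The field layout and the option string marking an optional field are assumptions made for readability, not the normative JADN v2.0 format, and the hand-rolled check stands in for a real JADN validator.

# Illustrative sketch only: a simplified JADN-style type definition as plain
# Python data, plus a hand-rolled validity check for instances of that type.
PERSON_TYPE = [
    "Person", "Record", [], "A person record", [
        [1, "name", "String", [], "Full name"],
        [2, "age", "Integer", ["[0"], "Age (optional)"],  # "[0" assumed to mark the field optional
    ],
]

def validate_person(instance: dict) -> bool:
    """Check unknown keys, required fields, and primitive types for the sketch above."""
    fields = {f[1]: f for f in PERSON_TYPE[4]}
    if any(key not in fields for key in instance):  # reject keys the model does not define
        return False
    if "name" not in instance or not isinstance(instance["name"], str):
        return False
    if "age" in instance and not isinstance(instance["age"], int):
        return False
    return True

print(validate_person({"name": "Ada", "age": 36}))  # True
print(validate_person({"name": 42}))                 # False: "name" must be a string

Because the model, rather than any one wire format, carries the meaning, the same definition could equally drive validation of a different representation of the record, which is the lossless-translation property described above.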

The documents and all related files are available here:

OpenC2 JSON Abstract Data Notation (JADN) Version 2.0

Committee Specification Draft 01

19 February 2025

Editable source:

https://docs.oasis-open.org/openc2/jadn/v2.0/csd01/jadn-v2.0-csd01.md (Authoritative)

HTML:

https://docs.oasis-open.org/openc2/jadn/v2.0/csd01/jadn-v2.0-csd01.html

PDF:

https://docs.oasis-open.org/openc2/jadn/v2.0/csd01/jadn-v2.0-csd01.pdf

For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at:

https://docs.oasis-open.org/openc2/jadn/v2.0/csd01/jadn-v2.0-csd01.zip

How to Provide Feedback

OASIS and the OpenC2 TC value your feedback. We solicit input from developers, users, and others, whether OASIS members or not, for the sake of improving the interoperability and quality of this technical work.

The public review is now open and ends April 28, 2025 at 23:59 UTC.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility. Members of the TC should submit feedback directly to the TC’s members-only mailing list. All others should follow the instructions listed here.

Please note, you must log in or create a free account to see the material. Please contact the TC Administrator (tc-admin@oasis-open.org) if you have any questions regarding how to submit a comment.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1], especially as it applies [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark, and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the OpenC2 TC can be found at the TC’s public home page: https://www.oasis-open.org/committees/openc2/

Additional references:

[1] https://www.oasis-open.org/policies-guidelines/ipr/

[2] http://www.oasis-open.org/committees/openc2/ipr.php

Intellectual Property Rights (IPR) Policy

The post OpenC2 JSON Abstract Data Notation (JADN) Version 2.0 CSD01 – Available for comment appeared first on OASIS Open.


EdgeSecure

Bridging Ideas and Institutions: The Power of Collaboration at AAC&U’s Annual Meeting

In 2024, the American Association of Colleges and Universities (AAC&U) became the first Affiliate Partner to join the EdgeMarket cooperative. This partnership provides AAC&U and its over 800 members with access to a wide range of services and solutions through EdgeMarket’s streamlined procurement platform. These services include advanced technology, cloud computing, digital transformation, and cybersecurity solutions, all designed to streamline the procurement process. By providing colleges and universities with access to competitively priced products and services, the collaboration aims to reduce both costs and administrative burdens, making it easier for institutions to acquire the tools they need to support their initiatives.

In addition to these services, the partnership focuses on addressing the critical issue of digital equity in higher education. With Edge’s support, AAC&U is working to bridge the technological gaps that can hinder educational access, ensuring institutions have the resources to provide equitable access to high-speed internet, devices, and digital learning tools. This initiative is especially important for reducing socioeconomic disparities in higher education, supporting AAC&U’s broader mission to foster a more inclusive and equitable academic environment. The strategic alignment between Edge and AAC&U reflects a shared commitment to advancing higher education through innovation, efficiency, and a focus on improving digital access and student success.

Connecting with AAC&U Members

AAC&U held their annual meeting on January 22-24, 2025, where Edge proudly served as a sponsor and hosted a networking reception.

The sessions included explorations of education and artificial intelligence, contemporary global issues, leading change on campus, and supporting equity and inclusion. “One of the strongest takeaways from this conference was how supportive the attendees are of the nonprofit mission and the method of connecting with others,” says Josh Gaul, Associate Vice President and Chief Digital Learning Officer, Edge. “The people attending and engaging with AAC&U truly believe in what they do, and they see the annual meeting as a valuable and beneficial experience. The event also served as an in-person kickoff to our partnership, providing Edge with the opportunity to introduce ourselves and engage directly with AAC&U members, fostering meaningful connections from the start. Many attendees seemed drawn to our services because they directly align with the academic needs of higher education and offered valuable solutions that resonate with their priorities.”

 “One of the strongest takeaways from this conference was how supportive the attendees are of the nonprofit mission and the method of connecting with others. The people attending and engaging with AAC&U truly believe in what they do, and they see the annual meeting as a valuable and beneficial experience. The event also served as an in-person kickoff to our partnership, providing Edge with the opportunity to introduce ourselves and engage directly with AAC&U members, fostering meaningful connections from the start. Many attendees seemed drawn to our services because they directly align with the academic needs of higher education and offered valuable solutions that resonate with their priorities.”

— Josh Gaul
Associate Vice President and Chief Digital Learning Officer, Edge

Edge’s Member Engagement Manager, Erin Brink, adds, “Several AAC&U members shared that they found our resources unique and were surprised to learn that something like our consortium existed. The nonprofit nature of our organization enhanced our credibility, and they could see we’re here to genuinely help and foster meaningful partnerships that could support institutions in their missions. They also appreciated learning about EdgeCon and the community events that we host, which added another layer of interest in what we do. Partnerships within the nonprofit sector are essential, especially with the growing demand for support in these challenging times for higher education. Many institutions expressed how valuable it is to have a resource like Edge, offering thought leadership and team support in critical areas such as digital learning, instructional design, and cybersecurity. As we continue to expand our footprint nationwide, we’re excited to have made some new connections and help even more organizations thrive in the ever-evolving landscape of higher education.”

“Several AAC&U members shared that they found our resources unique and were surprised to learn that something like our consortium existed. The nonprofit nature of our organization enhanced our credibility, and they could see we’re here to genuinely help and foster meaningful partnerships that could support institutions in their missions.”

— Erin Brink
Member Engagement Manager, Edge

As a mission-focused non-profit technology consortium, Edge is committed to serving the needs and advancing the interests of our members and their peers throughout the U.S. The EdgeMarket Affiliate Partner Program is built on the principle of collaboration, fostering meaningful partnerships with institutions to combine strengths and create innovative solutions that can benefit the higher education community.

This program is open to a variety of organizations, including research and education networks (RENs), particularly those lacking the resources to operate their own co-op, as well as associations that may not have a significant marketplace offering or diversified revenue streams. The program also extends to Historically Black Colleges and Universities (HBCUs), their associations, and consortia, as well as faith-based education associations and consortia. Education and healthcare systems and similar organizations that can benefit from the collective opportunities and resources provided through the program are also welcome to join.

Once the partnership is in place, EdgeMarket will onboard the Affiliate Partner’s members who complete the participation agreement, ensuring a smooth and easy process. Each quarter, EdgeMarket will provide the Affiliate Partner with detailed reports, highlighting new co-op members, contract activity, and any associated fees. The Affiliate Partner will also receive regular quarterly payments based on their share of the contract fees. To ensure the program is always improving and delivering maximum value, EdgeMarket and the Affiliate Partner will regularly meet to review progress, fine-tune the program, and explore new opportunities for growth and collaboration.

Learn more about the EdgeMarket Affiliate Partner Program and how to join at edgemarket.njedge.net/home/edgemarket-affiliate-partner-program.

The post Bridging Ideas and Institutions: The Power of Collaboration at AAC&U’s Annual Meeting appeared first on NJEdge Inc.


Leveraging AI for Universal Design for Learning in Higher Education

To help individuals enrich their tech-savvy journey, the EdgeCast series includes videos that share invaluable insights, tips, and tricks for navigating an ever-evolving world of technology. A recent addition to this series was Leveraging AI for Universal Design for Learning (UDL) in Higher Education, hosted by Josh Gaul, Associate Vice President & Chief Digital Learning Officer, Edge. He was joined by Dr. Laura Romeo, Director of Learning Innovation, Development, and Scholarship, Edge, and Jaimie Dubuque, former Teaching and Learning Technologist, Rider University.

Diving into higher education last November, Jaimie Dubuque joined Rider University after ten years as a classroom teacher and inclusive education facilitator in southern New Jersey. In that role, her primary responsibility was supporting school districts in including students with disabilities in the general education classroom. “My master’s degree is in curriculum and instruction, and I have experience as an English teacher,” shares Dubuque. “I’m also a parent of a child with ADHD, so I have a personal investment in topics like Universal Design for Learning (UDL) and the potential that AI has to unlock many barriers that exist for kids like my son.”

Like Dubuque, Dr. Romeo’s career path began in the classroom. After being an elementary teacher for seven years, she returned to school to get her master’s in instructional design and technology. “I became a curriculum specialist and started to see some discrepancies in the classroom with curriculum and how things are designed for student learning needs,” explains Romeo. “For that purpose, I wanted to do more research and dig into how we can overcome this one-size-fits-all learning approach by making UDL more mainstream.”

 

Putting UDL into Practice
Universal Design for Learning is an educational framework that aims to create flexible learning environments and instructional methods to accommodate diverse learners. By using these guidelines, institutions can provide multiple means of representation, action, expression, and engagement. “Putting the UDL framework into practice allows you to use a variety of media formats like video, images, and audio, provide alternatives to visual information through captions and descriptive audio, and incorporate tools like text to speech, speak to text, and digital note taking,” says Gaul. “Designing flexible, customizable learning environments that can be adopted and adapted to individual preferences and abilities helps organizations emphasize relevant, meaningful and authentic learning experiences that foster motivation and engagement.”

In partnership with Seton Hall University, Edge hosted the inaugural AI Teaching & Learning Symposium in June of 2024 to explore the impact of AI on teaching, learning, and the student experience. Dubuque was a presenter at this event and led a discussion on how AI can be leveraged in teaching and learning to enhance accessibility, as well as the practical strategies for implementing AI in planning. “No matter if you work at the front office of an elementary school or in the upper echelon of administration, one of the most common things you hear teachers say is, ‘I don’t have time,’” says Dubuque. “Tools like ChatGPT are going to be essential for maximizing our time and resource investments within schools without spending additional money.

“AI tools like ChatGPT can quickly turn teachers’ ideas and inspiration into customized, relevant content for their classrooms,” continues Dubuque. “A lesson plan that still has a one-size-fits-some approach can be transformed to incorporate elements of UDL principles. The request could be general, or an instructor could be more specific and ask for a guided notes outline for students to use during their discussion. All of this can happen without spending additional resources.”

“AI tools can help generate multiple formats that cater to different learning styles, but not everyone is aware of their availability and capabilities. Getting the word out and sharing real-life examples and model demonstrations will be essential to expanding its reach into more educators’ classrooms and helping teachers know how to use these tools more effectively.”

— Laura Romeo, Ph.D.
Director of Learning Innovation, Development, and Scholarship, Edge

As someone who also recognizes the benefits of using AI platforms like ChatGPT, Romeo says this tool can streamline content creation and refine content to ensure it aligns with UDL strategies and the needs of specific students. “AI tools can help generate multiple formats that cater to different learning styles, but not everyone is aware of their availability and capabilities. Getting the word out and sharing real-life examples and model demonstrations will be essential to expanding its reach into more educators’ classrooms and helping teachers know how to use these tools more effectively.”

Generating Educational Resources
In addition to writing a quick lesson plan, AI can be used to generate different educational resources. “ChatGPT can be used to create a variety of models to supplement a subject or lesson plan,” explains Dubuque. “For example, an instructor could use the platform to generate a PowerPoint presentation or slide that students can interact with on their own. Being able to control your own learning by dictating when a slide is moved and not moved is empowering for students, no matter their age or ability level. A teacher can also generate multiple worksheets that hit on those same UDL standards.”

“Another piece of this puzzle is students graduating from college and becoming first-year teachers,” continues Dubuque. “In some cases, it’s the wild west and they’re entering school districts that do not necessarily have a set curriculum. Or on the other hand, students are being handed a scripted curriculum that they’re expected to follow to a tee. Both of these scenarios are dangerous for education, and we must find the sweet spot where instructors can generate content that still aligns with the curriculum but can be customized for different learning styles. For example, an English teacher could use ChatGPT to generate response questions that students will answer in a small group. They can think how they want to generate conversation questions and create different levels of questions that provide more scaffolding for students to either elevate the level of the work they’re already doing or help them reach their grade level expectations. Most importantly, we’re able to create a shared experience in our classroom communities and help students at different levels and abilities to succeed.”

Conversing with AI
When using AI tools like ChatGPT, the goal is to provide specific prompts that will generate the most relevant and useful responses to the user. However, without practice, knowing how to effectively engage with ChatGPT can be challenging. “To get the most out of this tool, I’ve found asking open-ended questions followed by more tailored follow-up questions based on the solutions is a good approach,” says Romeo. “I also look back at my learning outcomes that I’m trying to align a lesson or content with and see if they match. Having authentic conversations with this AI component helps improve my productivity and allows me to personalize learning in new and creative ways.”

Dubuque agrees that open-ended questions are useful and says she approaches ChatGPT as a coach-athlete relationship, where the AI platform is functioning as the coach. “At the end of the day, you’re the athlete with the ball in your hands and you have to make the game time decision of what you’re going to do. Open-ended questions can provide ideas for particular scenarios or how you can ensure you’re meeting the needs of students who struggle with time management. From there, you can give it the parameters of your class. So, while I might be more generic in my questions around how I can incorporate UDL elements of action and expression in a particular lesson plan, I can then be more specific about how to do that in a 45-minute timeframe.”

“Many schools are concerned about students using AI inappropriately, but instead of focusing on how students can use this tool to cheat, we should look at how we can adapt assessments to reduce the need for plagiarizing,” continues Dubuque. “How do we put up guardrails without the need to police AI usage? The best way to prevent cheating is to keep the human element in instruction and talk to students along the way. Whether you teach a fully online class where you’re communicating over a learning management system (LMS) or on Zoom talking directly to a student, or in a classroom and seeing students in person, there are checkpoints in your instruction that can help keep students on track and accountable for their work.”

Institutions that are working to remove barriers to learning can make assessments more accessible and offer different formats, including oral, written, or visual options to demonstrate student understanding. “The key is for institutions to incorporate AI tools into their practice and show students how to use these productive knowledge constructs properly and in a way that will benefit them,” says Romeo. “Students can use AI to refine their ideas and enhance a portfolio of work that has videos, creative writing, and data analysis.”

Establishing Policies and Procedures
As stated in Rider University’s code of academic integrity, learning, teaching, and scholarship cannot be conducted in an atmosphere of dishonesty. To help establish policies and procedures, review allegations of academic misconduct, and promote academic ethics, the University has formed an Academic Integrity Committee. “Our academic integrity task force is currently looking at AI and how these tools and technology fit into our current codes of conduct and within different areas of study,” shares Dubuque. “The misuse of AI we’re seeing still falls under our current policies around cheating and plagiarism. In addition, our Teaching and Learning Center offers several workshops that explore AI, including a session that looks at the ways artificial intelligence can be used to embed the UDL framework into instruction.”

In working with Edge member institutions, Romeo says many organizations have clear guidelines set around their AI usage and these standards are outlined in course syllabi. “Taking this approach encourages transparency and ethical use of AI because you’re creating an open channel of communication between the instructor and the student. Students understand what resources they can use and how to use them. We’re also seeing many departments with specific instructors who not only regulate the use of AI, but are also trying to educate themselves and their faculty and students on how to use these tools responsibly.”

“From an instructional design perspective, there are bigger conversations starting about what resources are out there, how they’re being used, and how AI can be utilized ethically and effectively,” continues Romeo. “Many institutions are exploring whether to standardize AI usage across the entire organization or across individual departments. The AI-powered teaching assistant, Khanmigo, for example, can offer guidance to students to help them learn different subjects and can assist teachers in lesson planning. Tools like this are not moving the goalpost, they’re leveling the playing field for students. Since we’re at the starting point of understanding AI in education, the line is going to move and blur as we continue to have conversations about this topic and determine the best path forward.”

“Keeping student data private and secure is of the utmost importance, so any information put into an AI platform must be kept generic. As educators, designers, and instructors, we need to think about when using AI is appropriate and when it is not and look at this technology with a critical eye. AI can be very helpful when drafting course materials or to help spark creative ideas, but the final review of the content should always be completed by the educator to ensure integrity. We also must consider where the content goes once it is entered into an AI platform. For students that utilize it as a tool, can they close the computer and have an extensive conversation about the material? If not, they do not have ownership or accountability, and they need to go back and review the information. I anticipate that this is going to be an interesting conversation between unions, administrators, and school boards and thinking about what it means to use AI as an instructor and when it’s considered appropriate and when it’s not.”

— Jaimie Dubuque
Former Teaching and Learning Technologist, Rider University

Protecting Privacy and Intellectual Property
When using AI, educators will need to be mindful of the data that these tools collect and how that information is used. “Keeping student data private and secure is of the utmost importance, so any information put into an AI platform must be kept generic,” says Dubuque. “As educators, designers, and instructors, we need to think about when using AI is appropriate and when it is not and look at this technology with a critical eye. AI can be very helpful when drafting course materials or to help spark creative ideas, but the final review of the content should always be completed by the educator to ensure integrity. We also must consider where the content goes once it is entered into an AI platform. For students that utilize it as a tool, can they close the computer and have an extensive conversation about the material? If not, they do not have ownership or accountability, and they need to go back and review the information. I anticipate that this is going to be an interesting conversation between unions, administrators, and school boards and thinking about what it means to use AI as an instructor and when it’s considered appropriate and when it’s not.”


Part of Dubuque’s primary role is offering Canvas support and helping faculty members use the LMS to deliver dynamic learning experiences. “Teachers are looking for ways to use Canvas to create modules or add a rubric to an assignment. I’ll often offer helpful tips for using ChatGPT to create a course outline or course objectives based on their syllabus and show them the possibilities of AI tools. We’re able to start removing the stigma that AI is going to replace instructors, but rather can free up time so they can do other things that will enhance their classes and support their students. They’re able to be more thoughtful about their assessments, spend time giving authentic feedback, and align their instruction with UDL principles.”

“Artificial intelligence is not going to replace our human creativity or judgment, it is a tool that can enhance the teaching and learning experience when used properly and ethically. I’ve tried to remain curious about AI and continue to experiment with it, while keeping students’ needs and best learning principles in mind. Ultimately, it is our responsibility as educators to wield this tool thoughtfully, ensuring it enriches rather than diminishes the educational journey.”

— Laura Romeo, Ph.D.
View From The Edge

“If used thoughtfully, AI tools can help students organize their thinking and provide additional clarification on certain topics and content-specific vocabulary,” continues Dubuque. “It’s important to acknowledge that students with learning disabilities will still have those support needs when heading to college and AI tools like ChatGPT may start appearing as accommodations in students’ Individualized Education Programs (IEPs) and 504 plans. AI can provide the ability to break down some large-scale assignments into more manageable tasks and help students be more successful in their classes.”
Romeo adds, “Artificial intelligence is not going to replace our human creativity or judgment, it is a tool that can enhance the teaching and learning experience when used properly and ethically. I’ve tried to remain curious about AI and continue to experiment with it, while keeping students’ needs and best learning principles in mind. Ultimately, it is our responsibility as educators to wield this tool thoughtfully, ensuring it enriches rather than diminishes the educational journey.”

To hear more of this conversation and other thought-provoking discussions about the evolving world of technology, visit njedge.net/edgecast.

The post Leveraging AI for Universal Design for Learning in Higher Education appeared first on NJEdge Inc.


Transforming Campus Operations Through IT Virtualization

As higher education institutions face mounting financial pressures, the landscape of American academia is undergoing a significant transformation. A combination of factors has left many colleges and universities grappling with tighter budgets and increased financial strain. Particularly for schools in regions hit hardest by the “enrollment cliff,” these challenges have led to a shrinking pool of tuition revenue, forcing many institutions to re-evaluate their funding models. Compounding this situation are the lasting effects of the pandemic, which has strained resources even further.

Adding to this financial pressure is the growing disparity between institutions, where wealthier universities are often better positioned to weather the storm, while smaller, tuition-reliant colleges may struggle to find a path forward. In recent years, there has been a rise in college closures, mergers, and consolidations—especially among private institutions. In the face of these ongoing challenges, many academic institutions are turning to virtualized and outsourced information technology solutions to reduce operational costs, enhance efficiency, and remain current with technology trends and compliance requirements.

Economic Factors

Maintaining a robust IT infrastructure and keeping up with the fast-paced evolution of technology can be expensive for academic institutions. Add in the cost of supporting the salaries, training, equipment, and software of a fully staffed, in-house IT department, and an organization may find itself stretched thin. Several key factors are contributing to these economic challenges, forcing organizations to make difficult decisions about resource allocation and scaling back on vital technological advancements and upgrades.

Declining Enrollment Rates

Recent trends show a declining number of college-age adults throughout the country, as well as some potential students questioning the return on investment of higher education. With this shift, many institutions have seen a decrease in enrollment rates, making it even more challenging to attract students to their campus. As a result, institutions face a direct reduction in tuition revenue, and are forced to navigate financial constraints that can affect all areas of their institution, including critical sectors like IT.

Rising Operational Costs

Across the higher education community, institutions are facing escalating operational costs in nearly every area. Rising expenses for campus maintenance, utilities, healthcare benefits, and various other services are straining budgets and leading organizations to make difficult decisions. Information technology, in particular, represents a large and growing expenditure due to the ongoing need for software licenses, hardware updates, cybersecurity measures, and employee training. As financial pressures mount across the institution, IT departments are under pressure to find cost-saving strategies while still meeting the demands of students, faculty, and staff.

Reductions in Funding

Political shifts and economic recessions have led to decreased state appropriations for higher education, and many public institutions have seen a downtick in state and federal funding. Without these vital financial resources, institutions are facing difficult budgetary decisions around essential services and looking for cost-effective and innovative ways to maintain their technology infrastructure.

Shifting of Learning Models

Spurred by the pandemic, institutions across the country were forced to transition to remote learning almost overnight, diverting significant resources toward digital infrastructure to maintain continuity in education. This rapid shift required substantial investment in technology, software, and staff development, often at the expense of other critical areas. With federal support for these costs now behind us, institutions are left to find the necessary budget dollars to sustain and enhance robust online and hybrid learning models that meet the evolving expectations of both students and staff.

Advancing Technology

As technological advancements continue to accelerate, new opportunities are emerging on the horizon. However, to fully capitalize on and integrate innovations like artificial intelligence, data analytics, and cloud computing, ongoing investments are essential for institutions to stay ahead and remain competitive. Finding room in an already tight budget for new software tools, hardware updates, and skilled personnel can be challenging, requiring institutions to explore alternative strategies to ensure they can adapt to the demands of an increasingly digital world.

Increasing Demand for Cybersecurity

Ransomware, malware, and crypto mining are constant threats to any organization, and with higher education institutions becoming increasingly vulnerable to cyberattacks, protecting sensitive data and digital assets is essential. As these threats evolve, maintaining up-to-date cybersecurity measures becomes more expensive, and many institutions find it difficult to allocate the necessary resources to fully protect their IT.

Maintaining Competitiveness

With a smaller pool of prospective students, institutions are under increasing pressure to stand out from the crowd. Students are looking for advanced classroom tools, immersive digital campus experiences, and technology-driven learning spaces when choosing where to apply. Developing and maintaining these high-tech environments requires ongoing investments, and many institutions find it challenging to strike a balance between fostering innovation and managing financial constraints.

Evolving Student Expectations

In today’s digital age, students expect their institutions to offer a smooth, accessible online experience—complete with fast Wi-Fi, virtual collaboration platforms, easy access to academic resources, and secure online services available anytime, anywhere. To meet these expectations, colleges and universities must invest continually in their IT systems. Yet, with mounting financial challenges, delivering this level of service on a limited budget is no small feat. As a result, many institutions are turning to Virtual IT solutions, which provide a cost-effective way to enhance technological services while keeping expenses in check.

Lowering IT Operating Costs

As institutions face increasing pressure to innovate while managing strict budgets, the need for cost-effective solutions is more urgent than ever. The growing demand for advanced technology and expert support, however, often clashes with the financial realities of the industry. By partnering with external providers, institutions can benefit from economies of scale, efficient service delivery, robust security features, and 24/7 support. These providers can deliver technology infrastructure that may otherwise be out of reach for smaller or budget-conscious institutions, making it easier to stay competitive in an increasingly digital world.

Virtualization enables institutions to consolidate servers, storage, and other IT resources, improving resource utilization and reducing the physical space and energy consumption required to support IT operations. By moving away from on-premise infrastructure, colleges and universities can minimize expensive maintenance and hardware costs. Through cloud-based platforms, institutions can scale their IT capabilities on demand and eliminate the need for costly upfront investments in hardware and software. This model allows organizations to pay only for what they use, making budgeting more predictable and offering the flexibility to align IT spending with actual needs. With today’s cloud-based applications handling everything from administrative functions to learning management and data analytics, schools can streamline operations and shift more resources toward their core mission: advancing education and research.

By outsourcing IT services and partnering with specialized providers, institutions can eliminate the expense of maintaining a large in-house IT team and tap into expert knowledge and advanced tools for critical functions such as software development, help desk support, and cybersecurity. With built-in updates, virtual IT solutions allow institutions to stay ahead of technological advancements and security threats, all while freeing up internal resources to focus on their core academic missions.

Virtualized IT Solutions and Services

By outsourcing certain IT functions and leveraging cloud-based systems, institutions can access the latest technology, streamline operations, and ensure robust cybersecurity practices without the financial burden of maintaining large in-house teams or hardware. This approach allows institutions to allocate resources more effectively and ensure that technology can support their core missions without compromising financial stability.

Managing IT Infrastructure

One of the most common forms of virtual IT is moving to cloud-based solutions for data storage, hosting, and software applications. Instead of maintaining expensive on-premise hardware and software, institutions can transition their IT infrastructure to the cloud and rely on providers like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud to handle infrastructure management, maintenance, and updates. Moving to cloud-based solutions can help significantly reduce capital expenditures by eliminating the need for on-site hardware and ongoing maintenance costs, while offering flexible, usage-based pricing that aligns with actual demand.

Cloud platforms also provide unparalleled scalability, where institutions can adjust resources easily as needs grow, whether for storage, computing power, or specialized research applications. In addition, cloud providers prioritize security with advanced features like multi-factor authentication, encryption, and compliance with industry standards, alongside robust disaster recovery solutions to safeguard institutional data. By outsourcing infrastructure management, institutions can redirect resources and IT staff to focus on core priorities, such as enhancing education, supporting research, and meeting the needs of students and faculty.

Delegating Key IT Functions

Instead of maintaining a full-scale internal IT department, academic institutions can opt to outsource specialized IT functions to external providers. Services like network monitoring, help desk support, cybersecurity, and system administration can be handled by experts in these fields, allowing institutions to leverage the knowledge and resources of these partners. This approach not only ensures high-quality, efficient service but also allows the institution’s in-house IT team to concentrate on more critical projects, such as advancing teaching tools and supporting research innovations.

By partnering with managed IT providers, institutions can avoid the expenses of hiring, training, and retaining a full in-house team, while enjoying flexible pricing models that align with their budget. These services provide access to high-level expertise in areas like cybersecurity, cloud computing, and network infrastructure—fields where hiring full-time experts can be costly. As needs change, managed IT services offer scalability and give institutions the ability to adjust the level of support based on their current needs. Through specialized services, institutions can count on proactive monitoring and risk management to avoid costly breaches and penalties and ensure regulatory compliance.

Tapping into Remote Workforces

Virtual IT teams offer academic institutions the opportunity to access a global talent pool where they can hire skilled professionals from regions with lower labor costs. By recruiting remote staff for roles such as system administrators, software developers, or technical support agents in countries with more affordable wages, institutions can cut personnel expenses while maintaining high levels of expertise and service quality.

By sourcing talent from regions with lower wage standards, institutions can also reduce the funds required for salaries, benefits, and other employment costs and free up these resources for other priorities like infrastructure or academic programs. Access to a global talent pool also enables institutions to find specialized expertise at competitive rates, from cybersecurity professionals in India to software developers in the Philippines. Additionally, remote teams can provide scalability and flexibility and empower institutions to quickly adjust staffing levels based on project needs or peak demand without the long-term commitment of full-time hires. With a distributed workforce, institutions can achieve 24/7 support and continuous system monitoring, ensuring uninterrupted service for students, faculty, and staff. Most importantly, remote teams can bring diverse perspectives that foster innovation and creative problem-solving, enhancing the institution’s ability to address technological challenges.

Cutting-Edge Research Infrastructure

Advanced research requires robust IT infrastructure, often involving complex computing resources that can be costly to build and maintain. Rather than investing heavily in creating dedicated research environments, institutions can partner with external research organizations or cloud-based platforms that offer specialized resources, such as high-performance computing (HPC) and machine learning capabilities. This approach allows institutions to access powerful tools and optimize their research capabilities without the need for large upfront investments.

By partnering with cloud services or external research organizations, institutions can avoid the high capital costs of building and maintaining specialized environments. With pay-as-you-go models, they only pay for the resources they use, reducing upfront investments and ongoing maintenance costs. Outsourcing also provides access to cutting-edge technology, including the latest advancements in quantum computing, machine learning, and artificial intelligence, without the need for heavy internal investment. Additionally, the scalability and flexibility of outsourced R&D infrastructure allow institutions to adjust resources according to fluctuating project demands, ensuring they only pay for what they need. By shifting the burden of managing complex systems to external providers, academic institutions can focus more on core research activities, such as experimentation and data analysis, while also benefiting from collaboration with external experts and organizations.

Collectively Paying for IT Infrastructure

By leveraging shared-cost models, where multiple schools collaborate to jointly fund IT infrastructure, services, and support, institutions can access enterprise-level technology and services at a significantly reduced cost. These shared-cost models can extend to staffing and help smaller institutions to tap into specialized expertise by partnering with larger universities or third-party providers, ensuring access to high-level technical skills that might otherwise be out of reach.

By pooling resources and sharing financial responsibility with a network of institutions, schools can access enterprise-level technology and services at a fraction of the cost, making high-end solutions, such as advanced cloud computing, cybersecurity, and data storage, more affordable for smaller institutions. This approach also democratizes access to specialized infrastructure and expertise and gives more institutions the ability to tap into critical resources and technical skills, such as cybersecurity or cloud architecture. The flexibility of shared-cost models enables institutions to scale services and infrastructure as their needs evolve and share the financial burden of expansion across multiple schools. These models also foster collaboration by creating opportunities for institutions to exchange knowledge, solve common IT challenges together, and collaborate on research initiatives to drive innovation and growth.

Tapping into Existing Infrastructure

Many academic institutions and colleges are sitting on a valuable resource: their existing fiber optic networks. These extensive, often underutilized connections between campuses and facilities can be the key to unlocking greater efficiency and cost savings. Rather than investing in additional infrastructure, institutions can leverage their existing wide-area network (WAN) fiber backbone to enhance IT capabilities. By using this infrastructure to support virtual IT solutions, such as cloud services, remote support, and distributed computing, colleges and universities can maximize their existing technology investments and reduce the need for expensive new systems. This approach not only streamlines operations but also ensures that resources are put to their best use, creating a smarter, more sustainable way forward.

By avoiding the expenditures tied to building new networks or data centers, institutions can reallocate funds to other strategic priorities like research and student services. The high-speed, high-capacity nature of fiber networks allows for better use of existing resources and minimizes the need for new systems. Fiber networks also offer the flexibility to scale IT services based on demand, supporting growth without heavy new investments. With fast, reliable connectivity, fiber ensures high performance for virtual services and enhances online learning, research, and collaboration. To further help future-proof IT infrastructure, fiber has the ability to handle data-intensive applications and enable the adoption of emerging technologies like AI and big data without constant infrastructure overhauls.

Pooling Purchasing Power

Higher education institutions, especially those located near one another or part of collaborative academic networks, can unlock significant cost-saving opportunities by pooling their purchasing power. By joining forces in procurement efforts, these institutions can negotiate better deals on essential IT resources like software licenses, cloud services, and hardware such as servers and data storage. Collaborative buying enables access to volume discounts, shared services agreements, and bundled pricing that not only reduces costs but also opens the door to advanced solutions that might otherwise be out of reach for individual institutions. Colleges and universities can maximize their budgets while still making strides in enhancing their technological capabilities.

Through cooperative purchasing, institutions can secure volume discounts, shared services agreements, and bundled pricing, helping to stretch their budgets further while accessing more sophisticated IT infrastructure, software, and services. Smaller schools, in particular, can gain access to enterprise-level solutions like high-capacity data storage and cybersecurity tools that would otherwise be unaffordable. Through this collaboration, institutions gain negotiating leverage and can secure better pricing and enhanced support from vendors. The cooperative model also streamlines procurement by reducing administrative complexity and creating collective agreements, while fostering collaboration and the exchange of best practices among institutions.

Leveraging as-a-Service Solutions

A growing trend in virtual IT models is the adoption of “as-a-Service” solutions, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS), which allow academic institutions to access powerful technology tools and services without the need for upfront investment or the complexities of in-house management. By utilizing these solutions, institutions can easily integrate learning management platforms, student information systems, or research tools, all while minimizing the burden of internal development and maintenance.

IaaS offerings, such as virtual servers and data storage, provide the flexibility to scale infrastructure on-demand and only pay for the resources used. When institutions collaborate through procurement cooperatives, they can further reduce costs by securing volume pricing for these services, gaining access to enterprise-level technology while avoiding the administrative challenges of managing hardware, updates, and security internally.

By only paying for what they use, institutions avoid large upfront investments in software, hardware, and infrastructure, and can scale resources based on demand. Joining these procurement cooperatives further reduces costs through volume pricing and grants smaller institutions access to powerful, enterprise-level technologies like data analytics and advanced security tools, which may otherwise be unaffordable. “As-a-Service” solutions offer scalability and flexibility and help an organization to adjust resources for specific needs, such as peak enrollment periods or research projects. Since service providers handle updates, security, and infrastructure management, these models reduce complexity and maintenance and free up IT staff for more strategic tasks. As needs evolve, cloud-based solutions enable faster implementation and allow institutions to stay responsive and quickly deploy necessary systems to keep pace with these new demands.

Supplementing the Internal Workforce

Staff augmentation is an increasingly popular strategy for academic institutions adopting Virtual IT solutions. By supplementing an institution’s internal workforce with external professionals or specialized contractors for temporary projects or specific needs, colleges and universities can quickly scale their IT capacity and capabilities, without the long-term commitment and overhead costs associated with hiring full-time, permanent staff. By tapping into external expertise as needed, institutions can remain agile and efficiently address their evolving IT demands.

When external professionals are hired on a temporary basis, institutions only pay for the specific skills and time required, avoiding the fixed costs associated with permanent staff, such as salaries and benefits. This model also enables a rapid response to urgent IT needs, such as software rollouts, data migrations, or infrastructure upgrades, with experts brought in quickly to fill any gaps. Staff augmentation can also allow institutions to tap into a global talent pool and gain access to specialized skills that may be scarce or costly within their own region.

Scaling Storage Needs

For institutions wishing to focus on their core IT operations without the challenges of maintaining a physical data center, virtualized IT provides the opportunity to house servers, storage systems, and networking equipment in a secure, high-performance facility managed by a third-party provider. In this arrangement, institutions retain full control over their equipment and software but benefit from the provider’s expertise in managing the physical infrastructure, including power supply, cooling, security, and network bandwidth.

By leveraging the economies of scale of a colocation provider, smaller institutions can access high-quality data center services without the need for expensive on-campus infrastructure. This reduces capital investment and lowers ongoing costs associated with power, cooling, and staffing. Co-location facilities provide high levels of redundancy and ensure that critical IT systems remain operational during disruptions or hardware failures. The robust security measures implemented by co-location providers, such as 24/7 monitoring, biometric access controls, and strict regulatory compliance, also make this an ideal solution for institutions handling sensitive student and research data.

Embracing Technological Advancements

As higher education institutions navigate an increasingly complex financial and technological landscape, the need for innovative solutions has never been more pressing. Virtual IT offers a forward-thinking approach that can help institutions overcome their challenges while enhancing their technological capabilities. By utilizing cloud-based services, outsourcing key IT functions, and embracing virtualized infrastructures, institutions can significantly reduce operational costs, increase flexibility, and improve overall efficiency. This not only allows valuable resources to be reallocated toward their core educational objectives, but also ensures they remain agile and responsive to the ever-evolving demands of students, faculty, and the broader academic community.

In an environment where financial sustainability is critical, Virtual IT offers higher education institutions the opportunity to thrive in an increasingly competitive and technology-driven world. It transforms financial and operational challenges into opportunities for growth, ensuring that institutions can continue to fulfill their mission and drive innovation in the years ahead.

Discover how Virtual IT is transforming higher education by exploring The Edge Ecosystem of IT Virtualization Solutions and Services. Learn about the advantages, cost savings, and technological benefits of Edge’s virtualized IT solutions and how we can help your institution thrive.

The post Transforming Campus Operations Through IT Virtualization appeared first on NJEdge Inc.


Driving Agility Through Analytics: Advancing Data-Driven Decision Making in Higher Education

The post Driving Agility Through Analytics: Advancing Data-Driven Decision Making in Higher Education appeared first on NJEdge Inc.

With a background deeply rooted in social psychology, Nicole Muscanell’s career journey reflects the power of combining academic expertise with a passion for driving change in the education sector. Her graduate studies in social psychology not only shaped her intellectual foundation but also provided the insights that would guide her leadership roles throughout her career. “Social psychology is similar to sociology, but with a more focused approach, examining individual and small group behaviors rather than large societal structures,” explains Muscanell, Ph.D. Researcher, Research & Insights, EDUCAUSE. “This discipline delves into issues such as group relations, identity, race, social influence, and attitude change. During this time, I gained extensive research experience in quantitative social science and my graduate training allowed me to develop my skills as a researcher.”

While completing her postdoctoral work and taking on the role of assistant professor at Penn State York, Muscanell conducted research and helped build its new psychology undergraduate program. After five years, she returned to her hometown of Orlando, Florida, and began working as a Research Analyst at JHT Incorporated. While working for JHT, she was contracted to work for the Department of Defense to help evaluate their educational training programs at the Defense Equal Opportunity Management Institute (DEOMI). “DEOMI is in charge of training all active-duty members across the military branches in topics such as human relations and equal opportunity,” explains Muscanell. “I helped assess the effectiveness of their training programs, and it was an experience I truly enjoyed. Later in 2022, I wanted to return to my roots in higher education and joined EDUCAUSE as a researcher, heading up all aspects of research projects from design and data collection to analysis and publication.”

Exploring Emerging Trends
In her role at EDUCAUSE, Muscanell is a key member of the Research & Insights group, which plays a vital part in advancing the organization’s mission to transform higher education in service to a greater good. “Our goal is to help higher education professionals and institutions stay up to date with the latest trends, especially around technology and data, so they can make informed decisions and improve institutional success,” shares Muscanell. “To fulfill this mission, the research team conducts studies and produces research products focused on emerging topics, such as the 2024 EDUCAUSE AI Landscape Study published mid-February 2025. They also produce quick polls on timely, relevant subjects, allowing them to gather and share data rapidly.”

To help ensure that EDUCAUSE members stay ahead of trends and challenges in higher education, the team regularly publishes forward-thinking reports, like the Horizon reports and the “Top 10” issues in higher education IT. “These reports examine emerging trends and offer insights on how institutions can prepare for the future,” says Muscanell. “We’ve also been conducting several AI Landscape studies and try to focus on what might happen in the next few years and what steps can be taken now so we’re not blindsided. I recently shared some of these insights at EdgeCon where I explored the current landscape of analytics in 2024 and what that looks like for higher education.”


One of Muscanell’s notable contributions at EDUCAUSE is her authorship of the EDUCAUSE Analytics Landscape Study, a comprehensive report focused on advancing data-driven decision-making in higher education. Titled Advancing Data-Driven Decisions in Higher Education: Overcoming Barriers and Harnessing Analytics, the study explores the “current state of affairs” of analytics in higher education and the challenges institutions face in leveraging data effectively. When reflecting on her research, Muscanell says she was surprised by the contrast between the growing recognition of the importance of analytics in higher education and the slow progress in implementing the necessary infrastructure and actions to leverage them effectively. “While the recognition of analytics as a strategic priority has seen significant growth—rising from 28% in 2012 to 69% in 2024—the concrete steps needed to fully integrate analytics have not kept pace.”

Among the key findings from the 2024 EDUCAUSE Analytics Landscape Study, Muscanell found that the majority of respondents (79%) feel that their institutional leaders are interested in or are fully committed to analytics. The study also revealed that analytics are most commonly used to support key operational areas, including admissions, enrollment processes, and ensuring compliance with accreditation standards and regulatory obligations. In line with these operational uses, institutions primarily focus on analytics to enhance student success and outcomes, boost retention rates, and drive enrollment growth. When asked about adopting AI tools for analytics, approximately 35% of respondents indicated that their institution is making strategic efforts to incorporate AI into analytics, while 46% noted that these efforts involve collaboration between teams working on both analytics and AI strategy.

On January 9, 2025, Muscanell joined EdgeCon Winter 2025 to moderate the keynote panel discussion and share key insights from her Analytics Landscape Study. “This was my first time attending EdgeCon and I was really impressed by the event’s focus on innovation and forward-thinking solutions,” shares Muscanell. “I appreciated the balance between embracing emerging technologies like AI while maintaining a cautious and realistic approach. The discussions, both in breakout sessions and informal conversations, reflected an eagerness to explore how AI and other technologies could be used creatively and effectively in higher education. One of the highlights for me was an AI panel that concluded with a practical sharing of AI tools and their applications, which I found truly inspiring. What struck me most was the enthusiasm for technological advancement, but grounded in discussions about real-world challenges, like upskilling the workforce and preparing future generations for an AI-driven world.”


Enhancing Analytics Capabilities
To address the challenges of implementing analytics in higher education, Muscanell suggests several practical steps that institutions can take even with resource constraints. “One key strategy is to prioritize and align analytics with institutional goals, focusing on high-impact areas like student success, operational processes, and finances. Start to strategize and plan where analytics fit in with your institutional goals. Professional development, upskilling, and reskilling staff are also critical to enhancing analytics capabilities. While hiring new personnel is often not feasible, investing in the personnel that you already have and expanding their knowledge and literacy can be hugely beneficial.”

To foster collaboration and help break down silos, Muscanell suggests developing cross-functional teams that bring together expertise from various departments. “Bringing people together from across the organization, including IT, academics, institutional research, and finance, creates a shared understanding of the challenges and opportunities that come with implementing analytics. This collaborative approach not only encourages diverse perspectives but also helps align goals across different departments, making it easier to develop cohesive strategies and drive meaningful change.”

“Since data governance is involved, I would also suggest taking a phased approach to implementation, ensuring that progress is made gradually and systematically, with clear plans and accountability metrics in place,” continues Muscanell. “For institutions with fewer resources, look into leveraging open-source tools like Python and R for analytics that can handle anything from descriptive statistics to machine learning. If your organization does not have staff with the appropriate expertise to mature your analytics, consider consulting with external experts even for a short time to help bridge staffing gaps. Most importantly, look to form partnerships with your peer institutions and determine ways to share resources and knowledge to help each other succeed in analytics and AI initiatives.”
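As a minimal illustration of the “descriptive statistics” end of that spectrum, the sketch below uses the open-source pandas library; the file name and column names are hypothetical stand-ins for an institution’s own student records.

import pandas as pd

# Load a (hypothetical) extract of student records.
students = pd.read_csv("student_records.csv")

# Descriptive statistics for common student-success metrics.
print(students[["gpa", "credits_attempted", "credits_earned"]].describe())

# Year-two retention rate by entry cohort, a typical institutional KPI.
print(students.groupby("entry_cohort")["retained_year_2"].mean().sort_index())

The same stack extends to machine learning (for example, scikit-learn) without additional licensing costs, which is part of what makes it attractive for resource-constrained institutions.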

Prioritizing Agility and Scalability
In a rapidly changing landscape, integrating AI and other advanced tools into higher education analytics can be extremely challenging. “Higher ed can be slow to change and the swift evolution of technology, especially with AI, can make it difficult for institutions to stay ahead and properly address knowledge gaps,” says Muscanell. “When a new AI platform emerges unexpectedly, it can feel overwhelming for institutions that are already struggling to keep up. To mitigate these challenges, focus on scalable solutions and implementing technologies that are adaptable and can integrate well with existing systems. This helps avoid the need for complete infrastructure overhauls with every new development.”

In addition, Muscanell advocates for building agile technology teams capable of responding quickly to changes. “Since technology is not stagnant, make your IT more agile by training staff and cross-functional teams to be able to respond swiftly to new developments. Also invest in ongoing digital literacy initiatives and create a continuous learning environment for all faculty, staff, and students. This ensures everyone remains current as technology evolves. By prioritizing scalability, agility, and continuous learning, institutions can better navigate the rapid changes in AI and other technologies. By fostering agility through analytics and prioritizing scalability, institutions can unlock the full potential of data-driven decision-making and remain at the forefront of innovation in an ever-evolving world.”


The post Driving Agility Through Analytics: Advancing Data-Driven Decision Making in Higher Education appeared first on NJEdge Inc.


ResofWorld

What you need to know about Africa’s first AI factory

Nvidia and Zimbabwean billionaire Strive Masiyiwa join hands to bring supercomputer technology to the continent.
Africa just scored a major tech coup. On March 24, Zimbabwean billionaire Strive Masiyiwa’s Cassava Technologies announced a partnership with Nvidia to build Africa’s first artificial intelligence factory. This isn’t...

Turkey’s brain drain is taking its female coders with it

As sexism persists in Turkey’s tech sector, women are leaving for less patriarchal workplaces abroad.
In the summer of 2013, Gülçin Yildirim, a database administrator, received an invitation for a meeting at an Istanbul chocolate shop with 11 other female coders. They were there to...

Wednesday, 26. March 2025

Velocity Network

Verifiable Credentials – Bringing Trust and Truth to Talent Acquisition

The post Verifiable Credentials – Bringing Trust and Truth to Talent Acquisition appeared first on Velocity.

Digital Identity NZ

Farewells, new beginnings and the constant that is change

Kia ora, A Farewell Message This is the final DINZ newsletter under my watch, but if the writing style seems familiar in April’s newsletter you’ll know that AI has been deployed or my successor hasn’t been fully onboarded 🙂. It’s been a blast, it really has! I’m satisfied with DINZ’s development under my tenure. No regrets. … Continue reading "Farewells, new beginnings and the constant that i

Kia ora,

A Farewell Message

This is the final DINZ newsletter under my watch, but if the writing style seems familiar in April’s newsletter, you’ll know that AI has been deployed or my successor hasn’t been fully onboarded 🙂.

It’s been a blast, it really has! I’m satisfied with DINZ’s development under my tenure. No regrets. DINZ has punched way above its weight with minimal resources – only made possible by larger organisations whose support enabled volunteer members and supporters from organisations of all sizes to dedicate time and expertise to deliver our mahi on its mission. You know who you are, so thank you! 

Thanks also to the DINZ Executive Councils I have served over the years. It’s a largely unheralded gig, but governance in emerging ecosystems is crucial.

DINZ Updates

The DINZ Biometrics Special Interest Group submitted again on the Biometrics Code of Practice this month. Neither DINZ nor NZTech is supportive of the code at this time, instead advocating for expert-prepared Guidance to improve biometrics implementation best practice for all stakeholders, including privacy specialists who may not necessarily be biometric experts.
By the time you read this, our webinar ‘Meet the DISTF Evaluators’ will be underway with over 100 registrations – the first educational session on this aspect of the regime, as DINZ members enter the final week of free access to InformDI’s 7 Chapters DISTF online education suite.
DINZ members, please complete the DISTF survey emailed to you by 14 April.
The Next Gen Payments Consultation from DINZ member PaymentsNZ is due this Friday, so get yours in today!


Industry Insights

There are some ‘green shoots’ appearing in Aotearoa New Zealand’s digital identity space, but we have years of catch-up ahead to regain our global leadership. We have to be more nimble, smart and collegial in all dimensions (detailed examples would exceed the newsletter’s word count), but suffice it to say that it’s Kiwi companies like MATTR, Authsignal, JNCTN, APLYiD, MyMahi and others that keep us on the map globally.

While taking more leisure time, I’m open to some advisory work leveraging my decades of knowledge, experience and contact networks in both local and international settings, something not often possible in this role.

What’s Happening Around the Globe

As always, I’m sharing links to global news that resonated with me.

Passkey adoption is growing globally – no surprise to Authsignal, who have secured business on nearly all continents in the past few weeks.
This article argues the case for local biometrics software testing and resonates exactly with DINZ’s Kiwi faces dataset project, which requires stakeholder support before it can proceed. DINZ member NEC takes top score in the offshore tests that are available.
I found pundit Jamie Smith’s recent post on the fraud market quite thought provoking, although I don’t fully agree with him that Organisational Identity has been under a shadow. In NZ it’s not helped by agency responsibilities being split between MBIE and DIA. OI forms a major plank of Get Verified in the payments world, but there are other worlds of course.
That brings me to another Jamie Smith post on Edelman’s Trust Barometer (in the comments), where Aotearoa’s own Āhau gets a mention.
And for those who are technical and following mDL, this link and this link show just how quickly this space is evolving and maturing.

Final Thoughts

That’s it! Make sure you do something ‘identiful’ in April or attend the regional virtual catch ups on Identity Management Day and I’ll see you all soon out there in cyber.

Ngā mihi,

Colin Wallis
Executive Director, Digital Identity NZ

Farewells, new beginnings and the constant that is change

Read full news here: Farewells, new beginnings and the constant that is change

SUBSCRIBE FOR MORE

The post Farewells, new beginnings and the constant that is change appeared first on Digital Identity New Zealand.


Oasis Open

Call for Participation: Data Provenance Standards Technical Committee (DPS TC)

A new OASIS TC is being formed. The Data Provenance Standards (DPS) TC has been proposed by Cisco, the Data & Trust Alliance, IBM, Intel, Microsoft, Red Hat, and others listed in the charter below. The public TC homepage is here. All interested parties are welcome to join this TC. The eligibility requirements for becoming a participant […] The post Call for Participation: Data Prov

New TC aims to implement consistent tagging and metadata frameworks across data ecosystems—down to database, table, and column levels—to provide comprehensive data lineage and collection details tracking and support responsible data use, privacy, and compliance across all industries.

A new OASIS TC is being formed. The Data Provenance Standards (DPS) TC has been proposed by Cisco, the Data & Trust Alliance, IBM, Intel, Microsoft, Red Hat, and others listed in the charter below. The public TC homepage is here.

All interested parties are welcome to join this TC. The eligibility requirements for becoming a participant in the TC at the first meeting are:

You must be an employee or designee of an OASIS member organization or an individual member of OASIS, and
You must join the TC, which members may do by using the Roster “join group” link on the TC’s web page or by clicking here.

To be considered a voting member at the first meeting:

You must join the TC at least 7 days prior to the first meeting, on or before April 1, 2025; and
You must attend the first meeting of the TC on April 8, 2025.
Note: no work, including technical discussions or contributions, may occur prior to the first TC meeting.

Participants also may join the TC at a later time.

If your employer is already on the OASIS TC member roster, you may participate in DPS TC (or any of our TCs) at no additional cost. Find out how.

If your employer is not a member, we’re happy to help you join OASIS. Contact us to discuss your options for TC membership.

Please feel free to forward this announcement to any other appropriate lists. OASIS is an open standards organization; we encourage and welcome your participation.

CALL FOR PARTICIPATION

OASIS Data Provenance Standard Technical Committee Charter

The charter for this TC is as follows:

Section 1: TC Charter
1.a. TC Name
Data Provenance Standards Technical Committee (DPS TC)
1.b. Statement of Purpose
Provenance matters. We understand the sources of food, water, medicine, and capital (essential in our society to gauge quality and trust) and must now work to understand data, the fuel of our increasingly knowledge- and AI-centric world. For the purposes of this document and related TC efforts, provenance, pedigree, and lineage are recognized as distinct but interconnected concepts. The TC will prioritize early efforts to define how these terms, ranging from origin and history to granularity at the geographic, organizational, and individual levels, are scoped and applied to benefit all stakeholders. This will ensure comprehensive, practical, and actionable standards while mitigating ambiguity and scope constraints.

Of course, building trust in data starts with transparency of provenance: assessing where data comes from, how it’s created, and whether it can be used legally. Yet, the ecosystem still needs a common language to provide that transparency. Establishing shared provenance standards is foundational to fostering trust in data and AI-driven systems.

Over the past 18 months, the Data & Trust Alliance, in collaboration with industry organizations such as the EDM Council and AI Alliance, has worked to normalize and map its data provenance standards to existing initiatives while identifying practical adoption paths. For example, based on recommendations from the AI Alliance, the Data & Trust Alliance’s metadata framework has been integrated into Hugging Face model cards to promote provenance transparency in AI development. 

Using Version 1.0.0 of the Data Provenance Standards, defined by a working group of industry leaders from the Data & Trust Alliance, the OASIS Data Provenance Standards Technical Committee aims to advance data transparency, accountability, and trust by solidifying provenance standards into a universal data governance norm. 

This initiative will focus on implementing consistent tagging and metadata frameworks across data ecosystems, down to database, table, and column levels, to provide comprehensive data lineage and collection details tracking and support responsible data use, privacy, and compliance across all industries. The Committee will consider trust in data, ensuring that provenance, lineage, pedigree, and ultimately transparency support trust-building efforts in AI and data ecosystems. The Committee will consider existing trust models where relevant, ensuring alignment with industry best practices while remaining focused on provenance as a key enabler of trust.
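For a sense of what column-level tagging can look like in practice, here is a purely illustrative record sketched in Python; the field names are hypothetical and are not drawn from the Data & Trust Alliance’s Version 1.0.0 standards or any eventual TC specification.

# Illustrative column-level provenance record; all values are hypothetical.
column_provenance = {
    "dataset": "admissions_2024",
    "table": "applicants",
    "column": "high_school_gpa",
    "source_organization": "Example University Registrar",
    "collection_method": "self-reported via application portal",
    "collected_on": "2024-09-15",
    "geography": {"country": "US", "region": "NJ"},
    "license": "internal-use-only",
    "lineage": ["raw_applications.csv", "gpa_normalization_v2"],
}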

By establishing these standards, the Committee will enhance data life-cycle management, facilitate regulatory adherence, and reinforce trust in AI-driven and data-dependent applications. The Committee will also explore opportunities for integrating automated tools to generate and validate metadata, ensuring scalability and ease of adoption while maintaining trust and compliance. 

The goal is to create actionable standards that deliver measurable business value, such as enhanced operational efficiency and trust in AI systems, and to encourage adoption by demonstrating clear ROI for both data providers and consumers.
1.c. Business Benefits
It is expected that these standards will benefit all data and AI stakeholders, including:
-data suppliers (e.g., data producers, technology companies), who will be able to deliver clear and consistent data lineage information, making their datasets more valuable and trustworthy. Compliance can combat piracy and misuse.
-data acquirers (e.g., data-driven organizations, regulatory bodies), who will benefit from greater transparency and be better able to assess the reliability and intended usage of datasets and to request changes or reject data sets when necessary. Higher-performing AI tools can be a direct outcome.
-end-users (consumers), who will gain insight into how their data is managed and protected, and thus become more trusting in representative/non-biased data-driven solutions.

These standards will:
-enable data suppliers to provide standardized, consistent metadata on data lineage and provenance
-support data acquirers in managing compliance and mitigating risks associated with data privacy, security, and intellectual property rights
-help end-users by ensuring transparency in data handling and increasing trust in digital services. 

The standards will be relevant to professionals across various domains, including: 
-data governance professionals (including legal and compliance stewards)
-IT and compliance officers
-AI and data scientists
-business and industry professionals who rely on trusted data for decision-making. 

Adoption will be driven by enterprise demand for metadata-tagged datasets that offer faster access, reduced compliance risks, and improved decision-making. The availability of automated tools for metadata tagging and validation will significantly lower adoption barriers and costs for data providers.
1.d. Scope
The TC will develop cross-industry standards for defining data provenance, pedigree, lineage, and metadata-tagging frameworks. These will support tags at the database, table, and column levels, as well as metadata for graph databases, NoSQL databases, and data exchanged via APIs and other non-database structures. 

The scope includes creating guidelines and schemas for managing the data life cycle and tracking provenance, pedigree, and lineage across diverse data architectures and transmission methods. While highly domain-specific adaptations may require additional tailoring by industry groups, these standards are intended to provide a flexible foundation that is applicable across multiple sectors. 

Additionally, provenance-related geolocation metadata will encompass latitude/longitude, political/geographical boundaries, organizational context, and person-based attributes where relevant, supporting trust assessments based on data origin. 

The TC will also provide guidance on the development and integration of tools for automating metadata tagging, validation, and transformation, to ensure accuracy and compliance. The scope of these standards does not include tagging for misinformation, disinformation, or malinformation (“mis/dis/mal”); rather, such determinations are beyond provenance and are expected to be derived by users (e.g., AI/ML systems) externally to these specifications. 
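As a sketch of what automated validation tooling might do, the function below checks a record against a hypothetical list of required provenance fields; it is illustrative only, not an existing tool or part of the standards.

from typing import Any

# Hypothetical required fields; the actual standards define their own set.
REQUIRED_FIELDS = ["source", "collection_method", "license", "lineage"]

def validate_provenance(record: dict[str, Any]) -> list[str]:
    """Return a list of problems found in a provenance record (empty if none)."""
    problems = [f"missing field: {name}" for name in REQUIRED_FIELDS if name not in record]
    if not isinstance(record.get("lineage", []), list):
        problems.append("lineage must be a list of upstream dataset identifiers")
    return problems

# A deliberately incomplete record: the check reports the two missing fields.
print(validate_provenance({"source": "registrar-export", "license": "CC-BY-4.0"}))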

The TC will prioritize datasets that are critical for AI/ML applications and enterprise use cases, balancing comprehensive tagging with practical implementation considerations.
1.e. Deliverables
Expected deliverables include: 
-Committee specifications for standardized data provenance tags
-Committee notes and/or guides on how to implement the standards
-supporting documentation such as glossaries, UML models, and metadata requirements documents
-guidelines for integrating tools to automate metadata tagging, validation, and life-cycle management
-additional deliverables as determined by the Technical Committee, such as reference implementations, case studies, interoperability frameworks, based on ongoing needs and industry developments, or a study on how the standards align and enable compliance (for AI providers) with transparency regulation in the AI space as well as the benefits of the standards to data providers. 

The Technical Committee aims to release initial drafts by mid-2025. This will be followed by public feedback phases and iterative refinements, with the goal of finalizing and publishing the standards by late 2025. Timelines may be adjusted based on industry input and the progress of Committee discussions.
1.f. IPR Mode
Non-assertion
1.g. Audience
Participants will include AI ethics and privacy specialists, data governance and compliance professionals, IT managers, and regulatory advisors from various industries, particularly finance, healthcare, and retail.
1.h. Language
English
(Optional References for Section 1)
Data & Trust Alliance website detailing work to date on data provenance standards 
GitHub repository with technical specifications
Standards Executive Briefing
IBM’s IBV report with results of data provenance standards testing – 58% reduction in data clearance processing time for third-party data and a 62% reduction in data clearance processing time for IBM-owned or generated data
Use cases –four key areas of practice to help with understanding and sharing the standards
Section 2: Additional Information
2.a. Identification of Similar Work
Similar or related work includes: 
-NIST (https://www.nist.gov/itl/ai-risk-management-framework)
-EDM Council (https://edmcouncil.org/frameworks/cdmc/) with which the Data & Trust Alliance has collaborated and mapped to the CDMC; in the upcoming CDMC refresh we will have full alignment in our metadata
-MIT Media Lab (https://www.media.mit.edu/projects/data-provenance-for-ai/overview/) with which the Data & Trust Alliance has coordinated and determined that there are synergies but no duplication of effort
-W3C (https://www.w3.org/TR/prov-dm/) which is focused on web provenance; the Data & Trust Alliance has mapped its metadata to components of PROV, demonstrating minimal overlap
-complementary initiatives including the ISO standards for data management, the FAIR principles, and other industry-specific data governance frameworks
– the AI Alliance having adopted the standards for its definition of data trust
-OSIM (Open Supplychain Information Modeling, https://www.oasis-open.org/tc-osim/), the framework for structuring and exchanging supply chain data, enabling interoperability, transparency, and efficiency across industries
-DAD-CDM (Common Data Model for Defending Against Deception, https://github.com/DAD-CDM) which provides a standardized data model for AI and data development, thus enhancing interoperability, consistency, and efficiency across diverse data ecosystems
-COSAI (https://www.coalitionforsecureai.org/) which is focused on developing and promoting security standards, best practices, and policies to ensure the safe and responsible development and deployment of AI technologies
-Apache Atlas (https://atlas.apache.org/) which provides metadata management and governance capabilities that align with the data provenance standards by enabling structured metadata tagging and lineage tracking across enterprise data ecosystems
-OpenLineage (https://openlineage.io/) which offers an open framework for capturing and standardizing data lineage, complementing the data provenance standards by ensuring transparency and traceability in data workflows
-Community Data License Agreement (CDLA) (https://cdla.dev) which offers collaborative licenses designed to facilitate the open sharing, access, and use of data among individuals and organizations.

The DPS TC will differentiate itself by creating a cross-industry standard that focuses on comprehensive data provenance, pedigree, and lineage tracking, responsible (from an IP and privacy perspective) AI use, and regulatory compliance support, filling a gap for generalizable and adaptable provenance standards.
2.b. First TC Meeting
April 8, 2025 @ 1pm ET, via a virtual format
2.c. Ongoing Meeting Schedule
Meetings will be held monthly.
2.d. TC Proposers
Lisa Bobbitt, Cisco, lbobbitt@cisco.com
Kristina Podnar, Data & Trust Alliance kpodnar@dataandtrustalliance.org
Saira Jesani, Data & Trust Alliance sjesani@dataandtrustalliance.org
Asmae Mhassni, Intel, asmae.mhassni@intel.com
Kelsey Schulte, Intel, kelsey.schulte@intel.com
Mic Bowman, Intel, mic.bowman@intel.com
Peter Koen, Microsoft, jaywhite@microsoft.com
Babak Jahromi, Microsoft, babakj@microsoft.com
Jay White, Microsoft, jaywhite@microsoft.com
Stefan Hagen, Individual, stefan@hagen.link
Janaye Minter, NSA, vjminte@uwe.nsa.gov
Duncan Sparrell, SFractal, duncan@sfractal.com
Roman Zhukov, RedHat, rzhukov@redhat.com
Lee Cox, IBM, Lee.Cox@uk.ibm.com
2.e. Primary Representatives’ Support
I, Omar Santos, as OASIS primary representative for Cisco, confirm our support for the Data Provenance Standard TC and our participants listed above.
I, Kristina Podnar, as OASIS primary representative for Data & Trust Alliance, confirm our support for the Data Provenance Standard TC and our participants listed above.
I, Jeffrey Borek, as OASIS primary representative for IBM, confirm our support for the Data Provenance Standard TC and our participants listed above.
I, Dhinesh Manoharan, as OASIS primary representative for Intel, confirm our support for the Data Provenance Standard TC and our participants listed above.
I, Jay White, as OASIS primary representative for Microsoft, confirm our support for the Data Provenance Standard TC and our participants listed above.
I, Vincent Boyle, as OASIS primary representative for National Security Agency, confirm our support for the Data Provenance Standard TC and our participants listed above.
I, Mark Little, as OASIS primary representative for RedHat, confirm our support for the Data Provenance Standard TC and our participants listed above.
2.f. TC Convener
Kristina Podnar, Data & Trust Alliance, kpodnar@dataandtrustalliance.org
2.g. Anticipated Contributions
Standards Executive Briefing
GitHub repository with technical data provenance standards specifications, code snippets, documentation for standards adoption
Use cases – four key areas of practice to help with understanding and sharing the standards
Metadata generator – the TC will assess the feasibility of existing prototypes, such as the metadata generator, and recommend enhancements to align with the standards

The standards may serve as a precursor to broader frameworks like AI Bills of Materials (AI BOMs), enhancing traceability and compliance.
2.h. FAQ Document
https://dataandtrustalliance.org/work/data-provenance-standards
2.i. Work Product Titles and Acronyms
Data Provenance Metadata Specification
Data Lineage Standard for AI Compliance
Data Transparency and Accountability Standards

The post Call for Participation: Data Provenance Standards Technical Committee (DPS TC) appeared first on OASIS Open.


The Engine Room

Re-homing the Cybersecurity Assessment Tool (CAT)

Beginning in 2025 The Engine Room will be hosting and maintaining the Cybersecurity Assessment Tool (CAT) in our support programming. The post Re-homing the Cybersecurity Assessment Tool (CAT) appeared first on The Engine Room.

Beginning in 2025, The Engine Room will be hosting and maintaining the Cybersecurity Assessment Tool (CAT) in our support programming.

The post Re-homing the Cybersecurity Assessment Tool (CAT) appeared first on The Engine Room.


GS1

Clinical Trials - Cloned

Download the current standard

This standard provides the technical support for the GS1 Pharmaceutical Clinical Trial Electronic Messaging Standard Implementation Guideline.

See other GS1 XML standards in this 3.6 version

Clinical trials EDI semantic model and mapping

Any questions

We can help you get started using the GS1 standards.

Contact your local office

Get involved

Standards Maintenance Groups (SMGs) improve existing standards

List of working groups


We Are Open co-op

Reframing Recognition: Part 3

Demystifying Microcredentials Image CC BY-ND Visual Thinkery for WAO This is the third in a series of blog posts drawing on the insights from a report authored by WAO for the Irish National Digital Leadership Network (NDLN). The report explores the historical roots of credentialing, the emergence of microcredentials, and the opportunities they present for reshaping education and professional
Demystifying Microcredentials
Image CC BY-ND Visual Thinkery for WAO

This is the third in a series of blog posts drawing on the insights from a report authored by WAO for the Irish National Digital Leadership Network (NDLN). The report explores the historical roots of credentialing, the emergence of microcredentials, and the opportunities they present for reshaping education and professional development.

Microcredentials are increasingly discussed as a solution to the gaps in traditional education and workforce development systems. However, their purpose and value can be misunderstood, leading to confusion among educators, employers, and learners. In this post, with Open Recognition as a guiding concept, we clarify what microcredentials are, address common misconceptions, and explore how they can meet diverse needs.

Part 1 — Introduction and Context
Part 2 — The Evolution of Credentialing
Part 3 — Demystifying Microcredentials (this post)
Part 4 — Trends Shaping the Future of Microcredentials
Part 5 — The Role of Technology in Microcredentialing
Part 6 — Challenges and Risks in Microcredentialing
Part 7 — A Vision for the Future of Microcredentials

What Are Microcredentials?

At their core, microcredentials are forms of recognition that help to validate specific skills, knowledge, or behaviours. Unlike traditional degrees or diplomas, which often encompass years of study, microcredentials tend to be smaller, more modular and flexible. They can stand alone as markers of achievement or be combined to create larger qualifications, making them well-suited to today’s fast-changing education and employment landscapes.

Microcredentials are underpinned by principles of transparency and portability. Standards such as Open Badges ensure that each credential contains verifiable information about the skill achieved, the criteria for earning it, and the issuing organisation. This approach builds trust and enables learners to showcase their achievements across different platforms and contexts.
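To show what that verifiable information can look like, here is a simplified, Open Badges-style assertion sketched as a Python dictionary; the names, URLs, and identifiers are hypothetical, and a real assertion would be issued as spec-conformant JSON-LD.

import json

# Simplified, illustrative badge assertion; all identifiers are hypothetical.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://badges.example.org/assertions/1234",
    "recipient": {"type": "email", "hashed": False, "identity": "learner@example.org"},
    "badge": {
        "type": "BadgeClass",
        "name": "Community Facilitation",
        "criteria": {"narrative": "Facilitated three community workshops."},
        "issuer": {"type": "Issuer", "name": "Example Co-op", "url": "https://example.coop"},
    },
    "issuedOn": "2025-03-01T00:00:00Z",
    "verification": {"type": "HostedBadge"},
}

print(json.dumps(assertion, indent=2))

The skill achieved, the criteria for earning it, and the issuing organisation all travel inside the credential itself, which is what makes it portable across platforms and contexts.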

Addressing Common Misconceptions

One of the biggest misconceptions about microcredentials is that they are simply short courses. While some microcredentials are tied to short learning experiences, their true value lies in their ability to represent specific knowledge, skills, and behaviours in a way that is transparent and meaningful to employers and other stakeholders. Unlike traditional short courses, microcredentials include detailed evidence of achievement, such as project work or assessments, supported by standards like Open Badges, which ensure interoperability and trust.

Another misconception is the risk of ‘credential fatigue’ where learners are overwhelmed by the need to collect numerous credentials to prove their skills. Thoughtfully designed systems mitigate this risk by ensuring that microcredentials are stackable and aligned with broader qualification frameworks. This allows learners to build towards larger goals rather than accumulating isolated credentials.

Meeting Sector-Specific Needs

Microcredentials are well-positioned to tackle diverse challenges across various sectors. To maximise their effectiveness, they must adhere to certain key principles. They should be standards-based, utilising frameworks such as Open Badges or Verifiable Credentials (to be explored in a future post), which ensures their portability and interoperability.

The metadata included in a microcredential should clearly outline the skills achieved, the criteria met, and the issuing organisation. In addition, they must focus on specific, demonstrable knowledge, skills, and/or behaviours in a way that allows them to be ‘stacked’ towards larger qualifications. These features collectively ensure that microcredentials remain meaningful, portable, and impactful for both learners and organisations.

The principle of Open Recognition broadens the scope of microcredentials, valuing learning in all its forms — formal, informal, and experiential. By adopting Open Recognition, organisations can ensure that microcredentials are inclusive and accessible, celebrating a wide range of achievements. This approach enables credentials to reflect not only technical skills but also personal growth, community contributions, and soft skills.

To illustrate their versatility, here are examples of how microcredentials can address sector-specific needs. Microcredentials can:

Charities — Recognise skills gained through volunteer work, fundraising activities, and advocacy campaigns, helping participants showcase their contributions to broader causes.
NGOs — Validate the skills of community workers and volunteers, making their contributions visible and valued.
Co-ops — Highlight informal learning and collaboration, supporting shared growth and innovation.
Businesses — Align organisational values, such as sustainability or diversity, enhancing staff development and retention.
Higher Education — Offer targeted learning opportunities that complement traditional degree programmes, such as industry-specific certifications or co-curricular achievements.

Microcredentials are not simply a way of breaking up an existing course into more granular chunks. Instead, they allow organisations to think deeper and wider about the kind of knowledge, skills, and behaviours they wish to promote. This can be in collaboration with other providers, for example when Higher Education institutions work with businesses to ensure a talent pipeline.

Looking Ahead

In the next post, we’ll examine the trends shaping the future of microcredentials. From skills-based hiring to stackable learning pathways, we’ll explore how these developments are driving change and creating new opportunities in a range of sectors.

Reframing Recognition: Part 3 was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.


DIDAS

SIF/FIND Pathway 2035 – Digital Trust Infrastructure as a Cornerstone for Switzerland’s Financial Future

The media release is an official statement by the Swiss Federal Government announcing the strategic direction and implementation steps for digital identity and trust infrastructure. On 21 January 2025, the Swiss Federal Office of Justice (FOJ), in collaboration with the Federal Chancellery and the Federal Office of Information Technology and Telecommunications (FOITT), published an important ...

The media release is an official statement by the Swiss Federal Government announcing the strategic direction and implementation steps for digital identity and trust infrastructure.

On 21 January 2025, the Swiss Federal Office of Justice (FOJ), in collaboration with the Federal Chancellery and the Federal Office of Information Technology and Telecommunications (FOITT), published an important statement on the future of digital identity and trust infrastructure.

As DIDAS – the Swiss association for digital trust – we are proud to have actively contributed to this milestone. Through constructive consultation, cross-sector collaboration and the sharing of insights from our community, we helped shape the direction of this foundational work. A special thank you goes to our co-authors and all those who support this joint effort.

Digital trust infrastructure – such as the E-ID, verifiable credentials, and interoperable wallets – is essential for the digital resilience and competitiveness of Switzerland. These are not just technological building blocks, but key enablers of a sovereign, secure and citizen-centric digital society.

This milestone also directly connects with the recently published vision by FIND and the State Secretariat for International Finance (SIF):
👉 Pathway 2035: Unlocking Financial Innovation for Switzerland

The message is clear: Switzerland’s ability to lead in financial innovation depends on a trusted digital foundation. At DIDAS, we are committed to enabling this foundation – bringing together policy, technology, and ecosystem actors to reduce friction, build interoperability, and establish trust by design.

Because in the end, Digital Trust is not a nice-to-have – it is a strategic imperative for Switzerland’s future.

Digital Trust is now firmly positioned on Switzerland’s strategic, political, and economic agenda.

#DigitalTrust #EID #TrustInfrastructure #Switzerland #SIF #FIND #Pathway2035 #FinancialInnovation #DIDAS


Hyperledger Foundation

CREDEBL v2.0.0: Major Updates Across Platform, Studio, Credo-Controller, and Mobile SDK

We are excited to announce the release of CREDEBL v2.0.0, the first major update as an LF Decentralized Trust project. The new version is packed with significant enhancements, new features, and crucial fixes across our Platform, Studio, Credo-Controller, and Mobile SDK. This release improves security, functionality, and user experience while ensuring seamless integration with emerg

We are excited to announce the release of CREDEBL v2.0.0, the first major update as an LF Decentralized Trust project. The new version is packed with significant enhancements, new features, and crucial fixes across our Platform, Studio, Credo-Controller, and Mobile SDK. This release improves security, functionality, and user experience while ensuring seamless integration with emerging technologies. Here’s what’s new:


GS1

Clinical Trials

Download the current standard

This standard provides the technical support for the GS1 Pharmaceutical Clinical Trial Electronic Messaging Standard Implementation Guideline.

See other GS1 XML standards in this 3.7 version

Clinical trials EDI semantic model and mapping

Any questions

We can help you get started using the GS1 standards.

Contact your local office

Get involved

Standards Maintenance Groups (SMGs) improve existing standards

List of working groups


ResofWorld

A former Meta employee reviews the new Facebook memoir

Sarah Wynn-Williams’ memoir is a courageous feat, but it glosses over her own indifference to warnings from policymakers, civil society, and internal teams outside the U.S. about serious harm to communities from Facebook.
Last week, I read Sarah Wynn-Williams’ Careless People and was reminded of Carl Jung’s Everyman archetype — someone who fears being left out, and often compromises their morals in order...

Xiaohongshu’s global pivot: The surprise winner of the TikTok ban wants to keep its new users

The China platform faces political risks and e-commerce challenges in its overseas push ahead of a possible IPO.
Xiaohongshu, the Chinese social media platform known as RedNote, has been forced to adjust its overseas strategy after unexpectedly attracting millions of so-called TikTok refugees. But working to retain —...

Next Level Supply Chain Podcast with GS1

Replay: Technology and Modern Food Safety with Darin Detwiler

Do you ever think about food safety when you sit down for a meal? It’s easy to take for granted, but behind every meal, strict standards and practices ensure the food we consume is safe. In this replay episode, we revisit our conversation with Darin Detwiler, Founder and CEO of Detwiler Consulting Group. Darin’s path to food safety is deeply personal, driven by the tragic loss of his son to E. c

Do you ever think about food safety when you sit down for a meal? It’s easy to take for granted, but behind every meal, strict standards and practices ensure the food we consume is safe.

In this replay episode, we revisit our conversation with Darin Detwiler, Founder and CEO of Detwiler Consulting Group. Darin’s path to food safety is deeply personal, driven by the tragic loss of his son to E. coli.

Darin shares how the food safety industry is adapting to technological advancements like data analytics, AI, and digital solutions while meeting the ongoing demand for consistent production. If you've ever wondered about the efforts behind keeping food safe, this episode provides an inside look at the evolving food safety landscape and how we can continue protecting consumers in a rapidly changing environment.

 

In this episode, you’ll learn:

How digital solutions like data analytics and blockchain balance long-term and short-term food safety goals

The need for courage in food safety leadership to proactively manage and prevent crises

The power of social media to help improve food safety and transparency

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(00:45) What led Darin into the food safety industry

(04:05) What Detwiler Consulting Company offers

(09:20) New technology and trends in the food safety industry

(14:08) How Darin and his team use AI and evolve it

(16:39) Big failures that have taken place in the food safety industry

(23:47) Darin’s favorite technology at the moment

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Darin Detwiler on LinkedIn

Check out Detwiler Consulting Group

Tuesday, 25. March 2025

EdgeSecure

Leveraging Grant Support to Unlock Funding Opportunities

The post Leveraging Grant Support to Unlock Funding Opportunities appeared first on NJEdge Inc.

To support the vision and growth of institutional libraries, Edge is excited to unveil EdgeScholar, an innovative member solution designed to empower institutional libraries by enhancing research, information literacy, and grant writing capabilities. Through these services, Edge members can discover ways to leverage Internet2 connectivity to boost research and collaboration, optimize library resources and operations, and enhance their libraries’ roles as a campus learning hub. For organizations seeking vital funding, EdgeScholar offers full-service grant writing, modular options for specific tasks, research services, report writing, and draft reviews. Edge provides expert guidance and tailored support throughout the grant writing process, ensuring that institutions are well-equipped to identify, apply for, and successfully manage grant opportunities.


Capitalizing on Funding Opportunities
Grant funding opportunities are plentiful, with a wide range of resources available to support impactful research, institutional growth, and community engagement within higher education. These opportunities can significantly advance a university’s mission, from fostering innovative research initiatives to enhancing educational programs and addressing societal challenges. However, many institutions face significant resource constraints, often overburdened by the growing demand for support in navigating the complex landscape of grant applications, reporting requirements, and compliance. As a result, institutions may struggle to fully capitalize on available funding opportunities, potentially limiting their ability to drive meaningful change.

“Many sponsored programs offices are overwhelmed and do not have the resources, manpower, or time to find and fulfill grants,” says Laura M. Romeo, Ph.D., Director of Learning Innovation, Development, and Scholarship, Edge. “We want to fill that gap through flexible, hands-on support that will allow institutions to capitalize on more grant opportunities. Whether an organization needs help discovering potential funding sources, crafting a proposal, or aligning funding pursuits with long-term goals, Edge can guide the way.”

Joshua M. Gaul, Ed.D., Associate Vice President and Chief Digital Learning Officer, Edge, adds, “Edge acts as an extension of your team and can offer supplemental services and support that aligns with your organization. Wherever you are in the process and however much involvement you wish from Edge, we can tailor our approach to ensure you advance your grant initiatives successfully. You may seek a grant proposal review and editing or a larger grant strategy consultation and training—no matter where an organization is in the grant process, we are here to help institutions leverage our expertise and experience.”

Accessing Flexible Grant Support
No stranger to grant writing, Romeo began her grant experience as a classroom teacher, submitting grant proposals to fund projects and expand the learning experience. “I wanted to give my students opportunities that my school did not have the resources to provide and give the class access to literature and reading materials that could elevate the learning experience,” says Romeo. “During the grant process, I recognized the importance of ensuring the proposal and the funder’s priorities matched, as well as using clear language that aligned with the application and would capture the funder’s attention. More recently, while working on my doctorate, I was on a team that secured a $1.9 million grant from the National Science Foundation (NSF) to start a longitudinal study for elementary school teachers to develop science educators and leaders within their learning communities. During the yearlong process, we partnered with a couple of companies for this particular grant. Being involved in all the steps of the grant writing process and understanding the ins and outs of each component was an exceptional learning experience.”

Also well-versed in the grant process, Gaul has been working on a multi-year grant with Middlesex College and the other community colleges in New Jersey. “As part of the executive team, we are working on the trajectory of the grant and how to best manage the funds going forward,” shares Gaul. “While at SUNY Empire State, I was part of a team that received one of the few national grants awarded to develop a competency-based education program. We determined how to best tell our story, share our goals, and position ourselves to be the right recipient of the grant.”

Using their experiences and lessons learned, the Edge team understands not every institution will require the same level of support. “Our grant support services are highly adaptable and flexible,” explains Romeo. “This allows us to immediately meet the needs of an institution, align with their goals, and provide the precise support needed to help them succeed. Ultimately, we want to help strengthen their grant writing teams and improve the process, so they can go beyond securing one grant into a sustainable, long-term strategy for future funding.”

“Many sponsored programs offices are overwhelmed and do not have the resources, manpower, or time to find and fulfill grants. We want to fill that gap through flexible, hands-on support that will allow institutions to capitalize on more grant opportunities. Whether an organization needs help discovering potential funding sources, crafting a proposal, or aligning funding pursuits with long-term goals, Edge can guide the way.”

— Laura Romeo, Ph.D.
Director of Learning Innovation, Development, and Scholarship, Edge

Navigating Grants with Expertise
Among Edge’s full range of grant support services, organizations can pick and choose which benefits would be most effective in helping them navigate the process, stay on track, and avoid costly missteps. “For institutions who are unsure where to begin, our grant research and identification services can be very valuable in determining which funding opportunities to pursue,” says Romeo. “This part of the process can feel very overwhelming, so we simplify the steps and help you focus on the potential funding sources that best align with your institutional goals. When it comes to writing the proposal, crafting a compelling and clear story is critical. Our team of experts can review and refine proposals, ensuring they are polished, persuasive, and fully comply with grant guidelines.”

Edge’s grant support services also include budget development and helping guide institutions in creating a precise and realistic budget to include in their proposals. “Developing the budget can be tricky when trying to balance being competitive and realistic,” says Romeo. “We can help make sure the budget checks all the right boxes and supports both the organization and the funder’s goals.”

“At Edge, we believe your work is worthy of funding support,” adds Gaul. “Whether through federal and state initiatives or philanthropic foundations, the funding opportunities are abundant and waiting to be tapped. Since it’s a flexible, modular approach to grant support, an organization can select only the steps and services they need to capture the funding opportunities that can help fuel their success, achieve their goals, and make a lasting impact.”

To learn more about securing the expert guidance you require for your grant initiatives, visit njedge.net/solutions-overview/grantsupportservices.

The post Leveraging Grant Support to Unlock Funding Opportunities appeared first on NJEdge Inc.


Velocity Network

As we drive the adoption of globally mobile digital credentials, it’s critical to distinguish between Trust and Quality – two distinct but equally important challenges.

The post As we drive the adoption of globally mobile digital credentials, it’s critical to distinguish between Trust and Quality – two distinct but equally important challenges. appeared first on Velocity.

ResofWorld

Careem’s CEO wants to transform the ride-hailing giant into a “digital butler” for everyday life

Mudassir Sheikha, who co-founded the Middle East-based ride-hailing giant, on Careem’s new chapter, its recipe for winning local markets, and what Western companies get wrong in emerging economies.
Mudassir Sheikha likes to tell the story of Careem’s growth in chapters. The first began in 2012, when Sheikha and Magnus Olsson co-founded the company as a corporate car booking...

Monday, 24. March 2025

FIDO Alliance

White Paper: FIDO Alliance Guidance for U.S. Government Agency Deployment of FIDO Authentication

This document is intended to highlight areas where FIDO offers the best value to address U.S. Government use cases as an enhancement of existing infrastructure, while minimizing rework as U.S. […]

This document is intended to highlight areas where FIDO offers the best value to address U.S. Government use cases as an enhancement of existing infrastructure, while minimizing rework as U.S. Government Agencies advance their zero trust strategies with phishing-resistant authentication tied to enterprise identity as the foundation.

For any questions or comments, please contact feedback@fidoalliance.org.

Note this white paper has been revised – March 2025.

DOWNLOAD THE WHITE PAPER

Hyperledger Foundation

LF Decentralized Trust Mentorship Spotlight: VS Code Debugger Plugin for Hyperledger Fabric Chaincode

Introduction



ResofWorld

A Trump H-1B crackdown could hit Big Tech hard, with Amazon suffering most

U.S. tech firms — Amazon, Google, Meta, Microsoft, and Apple — were the top 10 employers of H-1B recipients in 2024.
As U.S. President Donald Trump sends mixed signals on the H-1B visa program, Amazon could find itself in a tight spot. The $2 trillion tech giant has bagged the most...

How a Japanese entrepreneur built Ethiopia’s fastest-growing EV maker

Dodai plans to introduce motorcycles with exchangeable batteries as a strategy for growth.
In energy-starved Ethiopia, where electricity reaches less than half of the population, a local startup’s unconventional bet on battery-powered motorcycles has hit the spot. In less than a year of...

Friday, 21. March 2025

ResofWorld

Mexico’s first homegrown EV faces a bumpy road 

Project Olinia aims to roll out its first model in 2026 as a cheaper alternative to popular Chinese EVs, but faces several challenges.
Earlier this year, Mexican President Claudia Sheinbaum beamed as a rendering of  three boxy, compact electric vehicles was displayed on a screen during a media briefing in the capital city....

We Are Open co-op

Reframing Recognition: Part 2

The Evolution of Credentialing Image CC BY-ND Visual Thinkery for WAO This is the second in a series of blog posts that draws on the insights from a report authored by We Are Open Co-op (WAO) for the Irish National Digital Leadership Network (NDLN) which explores the historical roots of credentialing, the emergence of microcredentials, and the opportunities they present for reshaping education and
The Evolution of Credentialing Image CC BY-ND Visual Thinkery for WAO

This is the second in a series of blog posts that draws on the insights from a report authored by We Are Open Co-op (WAO) for the Irish National Digital Leadership Network (NDLN), which explores the historical roots of credentialing, the emergence of microcredentials, and the opportunities they present for reshaping education and professional development.

As we mentioned in the first post in this series, our guiding concept is Open Recognition, which we believe to be at the heart of all valuable microcredentialing initiatives.

Part 1 — Introduction and Context
Part 2 — The Evolution of Credentialing (this post)
Part 3 — Demystifying Microcredentials
Part 4 — Trends Shaping the Future of Microcredentials
Part 5 — The Role of Technology in Microcredentialing
Part 6 — Challenges and Risks in Microcredentialing
Part 7 — A Vision for the Future of Microcredentials

From Ancient Systems to Professionalisation

Credentialing has a long history, evolving alongside societal changes to meet the growing need to recognise skills and learning. In Ancient Egypt, training systems validated authority and expertise, while China’s imperial examination system, in use over a thousand years ago, tested candidates for official positions. These systems laid the foundation for linking recognised skills to societal roles.

During the Middle Ages, guilds established structured systems to train and certify artisans. Apprenticeships emphasised hands-on learning and formal recognition, ensuring tradespeople demonstrated their competence before advancing. This model prioritised practical application, reflecting the needs of a skills-driven society.

The 19th century brought significant changes as industrialisation created demand for standardised education and professional qualifications. Universities and professional bodies formalised credentials, with degrees and certifications becoming essential markers of expertise in fields such as medicine, law, and engineering. These qualifications were designed to meet the need for consistent standards in rapidly expanding industries.

Lifelong Learning and the Emergence of Microcredentials

The 20th century saw a shift towards ongoing professional development, driven by rapid technological advances and changing workforce demands. Short courses and certifications became more prominent, offering flexible ways for individuals to update their skills. However, these credentials often lacked portability and alignment with broader qualifications, which limited their value.

Microcredentials build on this history, addressing gaps left by traditional systems. They are designed to be modular, stackable, and skills-focused, making them well-suited to today’s fast-changing educational and employment landscapes. Unlike short courses, microcredentials are typically standards-based, ensuring that they are both portable and meaningful across different contexts.

Aligning Microcredentials with Open Recognition

The principle of Open Recognition underpins the design of microcredentials, moving beyond formal qualifications to value diverse forms of learning.

“Open Recognition is the awareness and appreciation of talents, skills and aspirations in ways that go beyond credentialing. This includes recognising the rights of individuals, communities, and territories to apply their own labels and definitions. Their frameworks may be emergent and/or implicit.” (What is Open Recognition, anyway?)

This approach acknowledges informal, experiential, and community-based achievements, creating opportunities for broader inclusion.

For example:

Universities — Recognising learning outside the classroom, such as internships or co-curricular activities.
NGOs — Validating the skills of volunteers and community workers through accessible, evidence-based credentials.
Co-ops — Highlighting the informal knowledge gained by members, promoting collaboration and shared expertise.
Charities — Celebrating the contributions of volunteers and staff, offering evidence-based recognition for their skills and efforts.
Businesses — Demonstrating workforce adaptability by using microcredentials to validate and showcase employee skills, supporting professional growth and development.

The evolution of credentialing reflects an ongoing effort to address societal needs. Microcredentials represent the next stage, offering a flexible and inclusive way to recognise skills in diverse contexts. By integrating principles of Open Recognition, they have the potential to transform how learning is valued and applied across all sectors.

Looking Ahead

In the next post, we’ll help demystify microcredentials by addressing common misconceptions and exploring their potential to meet diverse needs. We’ll provide practical examples of how microcredentials can be implemented effectively in a variety of sectors.

Reframing Recognition: Part 2 was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 20. March 2025

OpenID

Implementer’s Guide: FAPI 2.0 Final vs. Implementer’s Draft 2.0

Author: Dima Postnikov, Vice-Chair of OpenID Foundation and FAPI WG Member Contributors: Gail Hodges, Nat Sakimura, Ralph Bragg, Filip Skokan, Joseph Heenan. This article is also accessible on Medium. Introduction In a significant milestone for the global Open Banking and Open Data community, on February 22nd, the OpenID Foundation published and approved the new and […] The post Implementer’s Gu

Author: Dima Postnikov, Vice-Chair of OpenID Foundation and FAPI WG Member

Contributors: Gail Hodges, Nat Sakimura, Ralph Bragg, Filip Skokan, Joseph Heenan.

This article is also accessible on Medium.

Introduction

In a significant milestone for the global Open Banking and Open Data community, on February 22nd, the OpenID Foundation published and approved the new and final version of the FAPI 2 Security profile and its Attacker model.

FAPI 2.0 Security Profile

FAPI 2.0 Attacker Model

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. 

OpenID Foundation’s FAPI working group and many invited experts have worked extensively on the specification to make it simpler, easier to understand, more secure, and more interoperable. This version has been through formal security analysis by the University of Stuttgart, and the certification test suite is being updated to reflect FAPI 2.0 Final. 

Since the last Implementer’s Draft, there have been many editorial changes: new introduction sections, formatting, corrected typos, section renumbering, updated references to sections of this document and other specifications, and added acknowledgments.

Note: The previous approved implementer’s draft can be found here.

Below is a detailed summary of the key changes that can impact existing FAPI 2.0 implementations. These changes should be reviewed by all implementers (live or planning to go live) who had built their implementations against the last implementer’s draft.

If you have questions about the specs, please direct them to the FAPI WG mailing list at openid-specs-fapi-owner@lists.openid.net. If you have certification questions, please direct them to certification@oidf.org.

Summary of changes between FAPI 2.0 Implementer’s Draft 2.0 and Final

The changes below are labeled AS if relevant for Authorization Servers, RP for Relying Parties and Clients, and Ecosystem for Ecosystem operators.

‘aud’ claim value in client authentication assertions. AS RP 

Why? This additional security control has been introduced as a result of the FAPI working group discussion after the specification’s formal security analysis.

Impact: If your ecosystem or implementation uses and/or supports client authentication assertions, e.g., private_key_jwt, this impacts your implementation (client or authorization server). 

The specification was updated to only allow the issuer identifier value as a value for ‘aud’. Authorization servers are required to enforce this behavior. Note: Previously, other values were allowed for interoperability purposes. 

Conformance tests will enforce this.
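
For illustration, here is a minimal sketch, not taken from the specification or the conformance suite, of a private_key_jwt client assertion whose `aud` is set to the issuer identifier only, as the Final version requires. The issuer URL, client_id, and freshly generated key are placeholders; a real client would load its registered signing key.

```python
# Hypothetical values; a real deployment uses its registered key and metadata.
import time
import uuid

import jwt  # PyJWT, with the cryptography package installed
from cryptography.hazmat.primitives.asymmetric import ec

ISSUER = "https://as.example.com"   # the AS issuer identifier (placeholder)
CLIENT_ID = "example-client"        # placeholder client_id

private_key = ec.generate_private_key(ec.SECP256R1())  # stand-in for the registered key

now = int(time.time())
assertion = jwt.encode(
    {
        "iss": CLIENT_ID,
        "sub": CLIENT_ID,
        "aud": ISSUER,              # only the issuer identifier is permitted here now
        "jti": str(uuid.uuid4()),
        "iat": now,
        "exp": now + 60,
    },
    private_key,
    algorithm="ES256",
)
# The assertion is sent as client_assertion with
# client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer.
```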

TLS security deferred to BCP195 AS

Why? Instead of repeating the content of BCP195, FAPI will refer all TLS security matters to BCP195.

Impact on Authorization Servers:

IETF periodically publishes BCP195. FAPI 2 implementers must comply with BCP195 changes within 12 months after publication.

Some ciphers have recently been removed from the recommended list. Please review BCP195 to ensure you are up to date.

Conformance tests will enforce this.

Elliptic curve keys’ length AS

Why? The FAPI2 specification aligns with TLS BCP requirements (https://www.rfc-editor.org/rfc/rfc9325.html) and NIST Guidelines [NIST.SP.800-56A], which have been updated recently.

Impact: Elliptic curve keys used by AS shall have a minimum length of 224 bits.

Conformance tests will enforce this minimum length.
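
As a quick sanity check, the sketch below (using the Python cryptography package; the choice of P-256 is only an example, not a mandate from the specification) verifies that a key clears the 224-bit floor.

```python
# Example only: generate a P-256 key and confirm it meets the 224-bit minimum.
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
assert key.curve.key_size >= 224, "EC key too short for FAPI 2.0 Final"
print(key.curve.name, key.curve.key_size)  # secp256r1 256
```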

Clock skew AS

Why? Clock skew causes many interoperability issues: even a few hundred milliseconds of clock skew can cause JWTs to be rejected by the AS as “issued in the future.” This new specification provides additional guidance for dealing with clock skew.

Impact: Authorization servers are required to accept JWTs with an iat or nbf timestamp between 0 and 10 seconds in the future and to reject JWTs with an iat or nbf timestamp greater than 60 seconds in the future.

Conformance tests will test this. 
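
A minimal sketch of that acceptance window follows; it illustrates the stated bounds rather than a complete JWT validator, and treating the 10-60 second range as acceptable is a policy choice left to the AS.

```python
import time

ACCEPT_FUTURE_SECONDS = 10   # must accept at least this much future skew
REJECT_FUTURE_SECONDS = 60   # must reject anything further in the future

def future_timestamp_ok(claim_value: int, now: int | None = None) -> bool:
    """Check an iat/nbf value against the FAPI 2.0 clock-skew guidance."""
    now = int(time.time()) if now is None else now
    skew = claim_value - now
    if skew > REJECT_FUTURE_SECONDS:
        return False   # shall be rejected
    return True        # 0-10s shall be accepted; 10-60s accepted here as a policy choice

# A JWT stamped 5 seconds "in the future" by a skewed client clock is accepted.
assert future_timestamp_ok(int(time.time()) + 5)
```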

MTLS ecosystems AS RP Ecosystem

Why? Some ecosystems have implemented MTLS as an additional security control at the transport layer for all server-to-server endpoints requiring transmitting sensitive data. For example, private_key_jwt is sometimes used for client authentication in conjunction with MTLS connectivity.

FAPI 2 recognizes this existing deployment practice and provides additional guidance to improve interoperability.

Key impacts (for ecosystems that choose to use it):

Authorization server implementations may utilize mtls_endpoint_aliases authorization server metadata to provide a discovery mechanism for endpoints that might have both MTLS and non-MTLS endpoints;

New client metadata `use_mtls_endpoint_aliases` was introduced for clients to indicate that they will be using mutual-TLS endpoint aliases [@RFC8705] declared by the authorization server in its metadata even beyond the Mutual-TLS Client Authentication and Certificate-Bound Access Tokens use cases. Client implementations shall use client metadata use_mtls_endpoint_aliases, if present.

Conformance tests will test this. 
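
A hedged sketch of the client-side selection logic is shown here; the metadata shapes are assumptions based on the mtls_endpoint_aliases structure from RFC 8705, not code from any particular SDK.

```python
# Pick the mTLS alias for an endpoint when the client opted in and the AS publishes one.
def resolve_endpoint(as_metadata: dict, endpoint_name: str, use_mtls_aliases: bool) -> str:
    if use_mtls_aliases:
        alias = as_metadata.get("mtls_endpoint_aliases", {}).get(endpoint_name)
        if alias:
            return alias
    return as_metadata[endpoint_name]

as_metadata = {  # hypothetical fragment of an AS discovery document
    "token_endpoint": "https://as.example.com/token",
    "mtls_endpoint_aliases": {"token_endpoint": "https://mtls.as.example.com/token"},
}

print(resolve_endpoint(as_metadata, "token_endpoint", use_mtls_aliases=True))
# -> https://mtls.as.example.com/token
```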

One-time use of `request_uri` AS

Why? Recent implementation experience of x2app with PAR highlighted that some operating systems pre-load authorization URLs and, in some cases, invalidate `request_uri` before an intended recipient app can use it.

Impact: Authorization servers are provided with additional guidance on enforcing one-time use of `request_uri` values: enforcement should occur at the time of authorization (where an authenticated user is presented with the consent details), not at the point of loading an authorization page. Once a user has been presented with consent details, the `request_uri` should be invalidated independently of what the user chooses to do (accept, reject, or ignore/close a browser window or an application).
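
A simplified sketch of that enforcement point is below; the in-memory set and function names are placeholders for whatever PAR storage an authorization server actually uses.

```python
# Invalidate the request_uri when consent details are shown, not when the URL is loaded.
used_request_uris: set[str] = set()

def on_authorization_page_load(request_uri: str) -> None:
    # OS or browser pre-loads land here; deliberately do NOT consume the request_uri yet.
    pass

def on_consent_screen_shown(request_uri: str) -> None:
    # An authenticated user has now seen the consent details: consume the value
    # regardless of whether they accept, reject, or abandon the flow.
    if request_uri in used_request_uris:
        raise ValueError("request_uri already used")
    used_request_uris.add(request_uri)
```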

Refresh token rotation prohibition Ecosystem

Why? Refresh token rotation has traditionally been used as a security control, but it is known to cause significant operational issues and user experience degradation, with clients losing access to their existing consents. FAPI2 doesn’t require this security control because of the use of confidential clients and sender-constrained access tokens.

Impact: Ecosystems using refresh token rotation should require Authorisation Servers to remove it unless used for infrastructure migration or other extraordinary circumstances.

Some other items we knew already but needed to clarify

Authorization servers only support confidential clients. Ecosystem

Authorization servers shall not support CORS for the authorization endpoint (clients are not accessing this endpoint correctly). AS

As accepted best practice, Authorization Servers should restrict the privileges associated with an access token to the minimum required for the particular application or use case. AS

Authorization servers should not allow clients to influence their client_id so that it can be mistaken for an end-user subject identifier (a new attack has been added to security considerations: Client Impersonating Resource Owner). AS Ecosystem

Clients shall only send `client_id` and `request_uri` request parameters to the authorization endpoint (all other authorization request parameters are sent via PAR (RFC9126). Clients will fail if they don’t conform to this behavior. AS will fail if they don’t ignore other request parameters sent to the authorization endpoint. AS RP Ecosystem

Clients are required to use `code` as the value for `response_type`. (The specification previously said clients shall support the authorization code grant, i.e. `response_type=code` and `grant_type=authorization_code`, described in [@!RFC6749].) AS RP Ecosystem

Clients shall generate the PKCE challenge specifically for each authorization request and securely bind the challenge to the client and the user agent in which the flow was started (see the sketch after this list); RP

For interoperability reasons, clients using the authorization code flow and OpenID Connect should not use a nonce longer than 64 characters. RP

Additional guidance is provided to limit the impact of key compromises in the security considerations section. AS Ecosystem
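
As referenced above, here is a minimal per-request PKCE sketch (RFC 7636, S256 method); how the verifier is bound to the client session and user agent is application-specific and not shown.

```python
import base64
import hashlib
import secrets

def new_pkce_pair() -> tuple[str, str]:
    """Generate a fresh verifier/challenge pair for a single authorization request."""
    code_verifier = secrets.token_urlsafe(64)  # high-entropy, never reused
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return code_verifier, code_challenge

verifier, challenge = new_pkce_pair()
# The challenge (with code_challenge_method=S256) goes into the PAR request;
# the verifier stays bound to the user-agent session and is sent to the token endpoint.
```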

OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post Implementer’s Guide: FAPI 2.0 Final vs. Implementer’s Draft 2.0 first appeared on OpenID Foundation.


Hyperledger Foundation

New Releases from Hiero Community for Integrating Identity Frameworks - Your Feedback Requested

We’re thrilled to announce the release of the Hedera Plugin for ACA-Py, an OpenWallet Foundation project, along with the Hiero DID SDK for Python, a subproject of LF Decentralized Trust’s Hiero! This marks a significant milestone in advancing decentralized identity solutions on the Hedera and other Hiero-based networks, integrating key open source identity frameworks to facilitate the d

We’re thrilled to announce the release of the Hedera Plugin for ACA-Py, an OpenWallet Foundation project, along with the Hiero DID SDK for Python, a subproject of LF Decentralized Trust’s Hiero! This marks a significant milestone in advancing decentralized identity solutions on the Hedera and other Hiero-based networks, integrating key open source identity frameworks to facilitate the development of privacy-preserving credentialing applications. This initiative also expands the Hiero ecosystem to embrace and better support Python Developers with a Native Python SDK.


ResofWorld

“I need to rethink my future”: Tech professionals on how they’re coping with Trump’s rapidly shifting immigration changes

Many immigrants are concerned about the impacts of changing policies and anti-immigration rhetoric.
Tech professionals around the world are on the edge as President Donald Trump and his administration impose a series of radical immigration measures. From attempting to deport foreign nationals who...

Screaming customers, unpaid workers: Inside the chaotic demise of Indian online delivery pioneer Dunzo

Dunzo was a trailblazer in India’s online delivery sector, but things went awry when it shifted from its core strength.
India’s booming quick-commerce sector saw a major casualty earlier this year. Dunzo, the first Indian startup to receive a direct investment from Google in 2017, shut down in January, leaving...

Wednesday, 19. March 2025

Internet Safety Labs (Me2B)

The Tao of Diversity, Equity, and Inclusivity

At ISL we pride ourselves on being dispassionate truth-seekers when it comes to assessing risky behaviors in technology. But this moment in history requires expressing our passion for truth and our core values more openly. What’s currently happening in the US is, by design, disruptive, disorienting, and discouraging for many. The calculated dehumanizing of trans […] The post The Tao of Diversity

At ISL we pride ourselves on being dispassionate truth-seekers when it comes to assessing risky behaviors in technology. But this moment in history requires expressing our passion for truth and our core values more openly.

What’s currently happening in the US is, by design, disruptive, disorienting, and discouraging for many. The calculated dehumanizing of trans people and immigrants coupled with the TESCREAL eugenics AI agenda elucidated by Timnit Gebru [1] foretells a worrying, broader dehumanization campaign ahead. As does the ongoing systematic removal and censorship of any kind of diversity, trans rights or related language on government sites. These actions inexorably lead to the detention and removal of targeted people, which, sadly, has begun.

This moment calls for pausing, ignoring the external chaos and looking within, to our own guiding north stars and speaking out for the values within us that reflect deep truths. Deep truths such as: love and celebration of diversity being vital for both humankind’s and the planet’s survival; it’s certainly vital for the work that we do every day at ISL.

Diversity, equity, and inclusivity are the way. Any other exclusionary, reductive, oppressive, dehumanizing way is antithetical to the truths of human kinship and interdependence and denies the reality that healthy ecosystems need diversity to sustain and thrive.

ISL exposes safety risks inherent in tech when used as intended. In other words, we’ve been establishing reasonable safety standards for the behavior of software-driven products and services. But how can we contemplate “safety” without the necessary question: “for whom?”

Many people think that ISL is solely focused on the safety of tech used by children. It may surprise you to know that that’s not our mission. It’s the brunt of our currently funded work, but our corporate vision is:

“A world where all digital products are safe for humans and humankind.”

In other words, safety for all.

I take this moment to reaffirm ISL’s commitment to prioritizing and fostering conditions to create a truly diverse, inclusive, and equitable organization.

Footnotes:

[1] https://firstmonday.org/ojs/index.php/fm/article/view/13636/11599

The post The Tao of Diversity, Equity, and Inclusivity appeared first on Internet Safety Labs.


DIF Blog

How to keep user data private while storing it in the cloud

By layering End-to-End Encryption (E2EE) on cloud storage services ourselves, we can ensure that our data is protected from unwanted access, even from the storage services themselves.

Recent developments in the UK have once again highlighted the importance of user-controlled encryption for protecting personal data. Due to government demands in the United Kingdom, Apple has recently announced that they have stopped offering end-to-end encrypted iCloud storage, Advanced Data Protection (ADP), to new users, and will disable the feature to all users in the country at an unknown point in the future.

What does this mean for you? 

In the short term it means that UK user data is less protected, and potentially vulnerable to misuse or breaches. Without end-to-end encryption (E2EE), data stored in iCloud is accessible to Apple and, by extension, potentially to government agencies and bad actors. In the long term it sets a dangerous precedent that Apple can turn off this crucial security feature for any group of users at any time, raising concerns about digital privacy worldwide.

What can we do about it?

The answer is relatively simple, we need to encrypt our own data. 

Relying on a third party to keep our personal data safe — even big names like Apple, Amazon, or Microsoft — will always carry risk. Policies change, and external governmental pressures can force providers to weaken security measures, leaving the user exposed.

By layering End-to-End Encryption (E2EE) on cloud storage services ourselves, we can ensure that our data is protected from unwanted access, even from the storage services themselves (in case they want to mine the content of our data).

And the best part? We can do it for free by using DIDComm!

What is DIDComm?

One powerful and open-source solution for encrypting data before cloud storage is the DIDComm protocol. Hosted by the Decentralized Identity Foundation (DIF), DIDComm provides a secure, private communication methodology built on Decentralized Identifiers (DIDs). 

When used to establish an end-to-end encrypted channel, DIDComm ensures that only essential delivery metadata remains in plaintext, while everything else — including the body, attachments, and so on — is encrypted for the intended recipients only. In practice, this means we can use the DIDComm message format to encrypt files as they are prepared to be synchronized from a local folder to the cloud storage service, ensuring that only the intended recipient (you) can access them.
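
To make the pattern concrete, here is an illustrative sketch only: it is not the DIDComm envelope format or the implementation linked below, just the general approach DIDComm formalizes, namely encrypting each file to the owner's key locally so the sync client only ever uploads ciphertext. It uses X25519, HKDF, and ChaCha20-Poly1305 from the Python cryptography package; key management and the real DIDComm message structure are out of scope here.

```python
import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

recipient_key = x25519.X25519PrivateKey.generate()  # stand-in for a key tied to your DID

def encrypt_for_recipient(plaintext: bytes, recipient_public: x25519.X25519PublicKey) -> dict:
    ephemeral = x25519.X25519PrivateKey.generate()
    shared = ephemeral.exchange(recipient_public)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"cloud-file").derive(shared)
    nonce = os.urandom(12)
    return {
        "epk": ephemeral.public_key().public_bytes(      # lets the recipient re-derive the key
            serialization.Encoding.Raw, serialization.PublicFormat.Raw
        ),
        "nonce": nonce,
        "ciphertext": ChaCha20Poly1305(key).encrypt(nonce, plaintext, None),
    }

envelope = encrypt_for_recipient(b"contents of tax-return.pdf", recipient_key.public_key())
# Only `envelope` is written to the synced folder; the cloud provider never sees plaintext.
```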

How to use DIDComm for cloud encryption

Steve McCown, Chief Architect at Anonyome Labs, was the first to create a practical method for encrypting files with DIDComm before storing them into the cloud. His recent GitHub release provides a thorough breakdown of how it all works and offers a step-by-step guide.  

For those looking to dive deeper:

Join one of our upcoming DIDComm user group calls
Join the discussion on DIF’s Discord in the #didcomm-user-group channel

OpenID

Standardized, Fine-Grained Authorization Using OAuth 2 Grant Management and Rich Authorization Requests

Since 2018, the OpenID Foundation’s FAPI Working Group and the global community have been developing standards to support Open Banking and Open Data. In “Standardized and Fine-Grained Authorization with OAuth 2 Grant Management and Rich Authorization Requests,” Dima Postnikov (OIDF Vice Chairman) and Gail Hodges (OIDF Executive Director) lay out how implementations around the world […] The post

Since 2018, the OpenID Foundation’s FAPI Working Group and the global community have been developing standards to support Open Banking and Open Data. In “Standardized and Fine-Grained Authorization with OAuth 2 Grant Management and Rich Authorization Requests,” Dima Postnikov (OIDF Vice Chairman) and Gail Hodges (OIDF Executive Director) lay out how implementations around the world have contributed to improvements in the specifications over time and explain the FAPI WG recommendations related to OAuth2 Grant Management and Rich Authorization Requests (RAR) to enable fine-grained authorization.

Previously deployed ecosystems did not have an opportunity to use a standard-based approach in fine-grained authorization. This draft paper explains why the FAPI WG is encouraging new ecosystems to become early adopters of Grant Management and RAR.

What to do next:

Read the draft paper here.

Implementers interested in being early adopters of Grant Management and RAR should contact openid-specs-fapi-owner@lists.openid.net to discuss the next steps.

More broadly, the FAPI Working Group is open to the public, and anyone can contribute at no cost by signing a contribution agreement. To learn more about FAPI or the FAPI Working Group, visit https://openid.net/wg/fapi/, sign up for the mailing list, and attend WG meetings.

We also recommend that all current and new ecosystems join the newly established Ecosystem Community Group to help the OIDF community provide ongoing support for ecosystem leaders. To learn more about Ecosystem Community Group, visit https://openid.net/cg/ecosystem-support-community-group/, sign up for the mailing list, and attend CG meetings.

OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more atopenid.net.

The post Standardized, Fine-Grained Authorization Using OAuth 2 Grant Management and Rich Authorization Requests first appeared on OpenID Foundation.


ResofWorld

How Tesla blew its lead

BYD took the global EV crown. Now Chinese rivals and local startups are taking over emerging markets.
Tesla is having a tough year. Once a world leader in the electric-vehicle market, the company’s sales across the U.S., China, and several European countries fell year on year in...

Tuesday, 18. March 2025

Hyperledger Foundation

Upcoming Elections for End User and Community Seats on the Hiero Technical Steering Committee (TSC)

We are excited to announce the upcoming elections for two vital positions on the Hiero Technical Steering Committee (TSC): the End User Seat and the Contributor Seat. These roles are instrumental in guiding the technical direction and ensuring the diverse representation of our community within the Hiero project. This special election will be held from May 7th until May 31st, 2025, to fill

We are excited to announce the upcoming elections for two vital positions on the Hiero Technical Steering Committee (TSC): the End User Seat and the Contributor Seat. These roles are instrumental in guiding the technical direction and ensuring the diverse representation of our community within the Hiero project. This special election will be held from May 7th until May 31st, 2025, to fill these two TSC seats. The nomination period is open from March 17th until April 30th, 2025. Please ensure your submissions are completed before the nomination deadline.

Monday, 17. March 2025

FIDO Alliance

IT News: Over 200,000 myGov users disable passwords in passkey shift

New figures reveal that over 200,000 myGov users stopped using passwords in favour of exclusively using passkeys as their login method by the end of last year.

New figures reveal that over 200,000 myGov users stopped using passwords in favour of exclusively using passkeys as their login method by the end of last year.


Hyperledger Foundation

Developer Showcase Series: Bruno Nascimento, Staff Engineer, Cheesecake Labs

Back to our Developer Showcase Series to learn what developers in the real world are doing with LF Decentralized Trust technologies. Next up is Bruno Nascimento, a staff engineer at Cheesecake Labs. 

Back to our Developer Showcase Series to learn what developers in the real world are doing with LF Decentralized Trust technologies. Next up is Bruno Nascimento, a staff engineer at Cheesecake Labs


We Are Open co-op

Reframing Recognition: Part 1

Introduction and Context Image CC BY-ND Visual Thinkery for WAO We Are Open Co-op (WAO) recently authored a report for the Irish National Digital Leadership Network (NDLN) on New Learning and Teaching Models. Our focus was on the transformative potential of micro and digital credentialing in recognising learning and achievements. Central to the approach we took is the concept of Open Rec
Introduction and Context Image CC BY-ND Visual Thinkery for WAO

We Are Open Co-op (WAO) recently authored a report for the Irish National Digital Leadership Network (NDLN) on New Learning and Teaching Models. Our focus was on the transformative potential of micro and digital credentialing in recognising learning and achievements. Central to the approach we took is the concept of Open Recognition, which seeks to acknowledge skills, talents, and experiences in ways that are inclusive and accessible.

‘Microcredentials’ has emerged as a popular term for digital credentials, including Open Badges and Verifiable Credentials. Confusingly, however, ‘microcredentials’ is sometimes used to simply mean a short online course. When approached thoughtfully, microcredentials are a flexible way to address skills gaps, support lifelong learning, and enhance employability. Across different sectors, from universities and businesses to NGOs and co-ops, they offer the potential to modernise how learning is recognised and celebrated. By focusing on inclusivity, transparency, and learner control, microcredentials align with the growing demand for education systems that reflect diverse needs and experiences.

This blog series draws on the insights from our NDLN report, exploring the historical roots of credentialing, the emergence of microcredentials, and the opportunities they present for reshaping education and professional development. Along the way, we’ll examine how microcredentials can address sector-specific challenges and offer practical steps for implementation.

What to Expect from This Series

Each post in this series will explore a key aspect of microcredentials, building on the foundations of Open Recognition and highlighting practical applications:

Introduction and Context (this post) — Setting the scene for microcredentials, their principles, and potential across sectors.
The Evolution of Credentialing — Tracing the journey from ancient recognition systems to modern microcredentials.
Demystifying Microcredentials — Clarifying what microcredentials are, addressing misconceptions, and exploring best practices.
Trends Shaping the Future of Microcredentials — Highlighting key developments, such as skills-based hiring and stackable learning pathways.
The Role of Technology in Microcredentialing — Exploring the digital tools that make microcredentials possible, from open standards to AI.
Challenges and Risks in Microcredentialing — Examining barriers to adoption, including equity, quality assurance, and privacy concerns.
A Vision for the Future of Microcredentials — Outlining actionable steps for organisations to harness microcredentials effectively.

Why Open Recognition Matters

At the heart of microcredentialing is recognition. Unlike more traditional approaches to credentialing, Open Recognition values learning in all its forms — formal, informal, and experiential. This principle underpins the idea that credentials should not only validate achievements but also inspire confidence, celebrate diversity, and enable lifelong learning.

What might this look like in practice?

Charities — Acknowledging the efforts and skills of staff and volunteers, highlighting their contributions to social causes.
NGOs — Validating the contributions of community workers and volunteers.
Co-ops — Recognising the informal skills and knowledge gained through collaboration.
Businesses — Verifying and showcasing employee skills, enhancing workforce development and adaptability.
Higher Education — Offering microcredentials that align with degree programmes while recognising non-traditional learning paths.

Across all sectors, Open Recognition provides a lens for designing systems that are inclusive and meaningful.

Looking Ahead

In the next post, we’ll explore the evolution of credentialing, showing the rich history on which microcredentials build, and re-introducing the concept of Open Recognition. By examining how credentialing has evolved to meet societal needs, we’ll better understand the opportunities and challenges of the present moment.

Stay tuned for practical insights, examples, and strategies to help your organisation embrace microcredentials — whatever sector you’re working in.

Reframing Recognition: Part 1 was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.


FIDO Alliance

Mobile ID World: VicRoads Implements Passkeys Authentication System for Enhanced Digital Security

VicRoads, Victoria’s road transport authority, has implemented a passkeys authentication system as part of its digital security enhancement initiative, marking a significant step in Australia’s broader transition toward advanced digital […]

VicRoads, Victoria’s road transport authority, has implemented a passkeys authentication system as part of its digital security enhancement initiative, marking a significant step in Australia’s broader transition toward advanced digital identity solutions. The new system moves away from traditional password-based authentication methods toward a more secure passwordless approach, following similar changes by major technology providers like Microsoft in recent months.


The Payers: Fime secures FIDO IDV certification for identity verification

Fime’s testing laboratories in both EMEA and Taiwan have obtained full accreditation under the FIDO Alliance Identity Verification (IDV) Certification Programme. This certification allows the company to assess and validate identity verification […]

Fime’s testing laboratories in both EMEA and Taiwan have obtained full accreditation under the FIDO Alliance Identity Verification (IDV) Certification Programme.

This certification allows the company to assess and validate identity verification vendors’ Document Authenticity and Face Verification solutions, contributing to fraud prevention efforts while ensuring compliance with industry standards.

Growing concerns over deepfakes drive standardisation 

The introduction of FIDO’s IDV Programme comes in the context of increasing concerns about AI-driven fraud. According to the official press release, despite over 70 billion digital identity verification checks conducted in 2024, more than half of users remain worried about the risks posed by deepfakes and other fraudulent activities. The programme establishes a unified accreditation process to ensure remote identity verification solutions are secure and resistant to manipulation. 

A representative from Fime stated that remote identity verification is essential for sectors such as banking and digital ID enrolment, given the rapid advancements in deepfake technology. The official highlighted the importance of FIDO IDV Certification in helping service providers ensure that their vendors deliver reliable, validated solutions capable of protecting users and mitigating risk. 

Officials from the FIDO Alliance emphasised that the certification programme is designed to strengthen security during onboarding and enrolment processes. They noted that, alongside biometric component certification, this initiative aims to reduce reliance on traditional passwords while enhancing security and user experience.


Help Net Security: Goodbye passwords? Enterprises ramping up passkey adoption

87% of companies have, or are in the midst of, rolling out passkeys with goals tied to improved user experience, enhanced security, and compliance, according to the FIDO Alliance. Key findings Enterprises […]

87% of companies have, or are in the midst of, rolling out passkeys with goals tied to improved user experience, enhanced security, and compliance, according to the FIDO Alliance.

Key findings

Enterprises understand the value of passkeys for workforce sign-ins. Most decision makers (87%) report deploying passkeys at their companies. Of these, 47% report rolling out a mix of device-bound passkeys (on physical security keys and/or cards) and synced passkeys (synced securely across the user’s devices).

Organizations are prioritizing passkey rollouts to users with access to sensitive data and applications, including the three most commonly cited priority groups: Those requiring access to IP (39%), users with admin accounts (39%), and users at the executive level (34%). Organizations leverage communication, training, and documentation within these deployments to increase adoption.

Passkey deployments are linked to significant security and business benefits. Respondents report moderate to strong positive impacts on user experience (82%), security (90%), help center call reduction (77%), productivity (73%), and digital transformation goals (83%).

Groups that do not have active passkey projects cite complexity (43%), costs (33%), and lack of clarity (29%) about implementation as reasons. This signals a need for increased education for enterprises on rollout strategies to reduce concerns, as there is a correlation between these perceived challenges and the proven benefits of passkeys.

Saturday, 15. March 2025

EdgeSecure

EdgeCon Autumn 2025

October 2, 2025 at Rider University The post EdgeCon Autumn 2025 appeared first on NJEdge Inc.

Date: October 2, 2025
Location: Rider University
Time: 9 a.m.-5 p.m.
Attendee Ticket: $49

Event Location:
Rider University

Conference proceedings information and submission options will be available soon.

The post EdgeCon Autumn 2025 appeared first on NJEdge Inc.

Thursday, 13. March 2025

FIDO Alliance

Get With IT Podcast: The State of Passkey Adoption

In this episode, Jenna Barron interviews Andrew Shikiar, CEO and executive director of FIDO Alliance. They discuss the state of passkey adoption in the industry today and how organizations can […]

In this episode, Jenna Barron interviews Andrew Shikiar, CEO and executive director of FIDO Alliance. They discuss the state of passkey adoption in the industry today and how organizations can prepare for adopting them.

Key talking points include:

Why passkeys are more secure than passwords How widespread their adoption is  Ways organizations can prepare for broader passkey adoption

Visit Passkey Central for more resources on passkeys: https://www.passkeycentral.org/home 

Wednesday, 12. March 2025

Digital ID for Canadians

The Digital ID and Authentication Council of Canada (DIACC) Written Submission for the Pre-Budget Consultations (March 2025)

Submitted by: Joni Brennan, President List of recommendations Recommendation 1: That the government prioritize digital trust in four areas critical to Canada’s leadership and the…

Submitted by: Joni Brennan, President

List of recommendations

Recommendation 1: That the government prioritize digital trust in four areas critical to Canada’s leadership and the privacy, security and protection of our people and industries, including:

Digital Trust in Small and Medium Sized Businesses (SMBs) and E-Commerce; Digital Trust in Finance and Regulatory; Digital Trust in Public Sector Modernization and Citizen Services; and Digital Trust in Public Safety.

Recommendation 2: That the government recognize the necessity of embracing and prioritizing verification and authentication tools as part of its AI strategy.

Recommendation 3: That the government allocate the funding needed to support the adoption of digital trust tools to the benefit of government, businesses, and citizens alike.

Introduction

In today’s geopolitical and economic climate, Canada needs to urgently act to maximize economic security, growth and productivity — all of which depend on a foundation of trust. In an era where digital transactions drive commerce, investment, and public services, ensuring the authenticity of identities, data, and financial interactions is essential for stability and long-term success.

Without secure and privacy-respecting verification, businesses face higher fraud risks, increased compliance costs, and reduced consumer confidence. Investors and trading partners demand transparent, verifiable transactions and economic resilience, which depend on our ability to safeguard financial systems, facilitate secure trade, and unlock the full potential of AI-driven innovation. Strong verification systems are also key to removing barriers to interprovincial and international trade, ensuring Canadian businesses can compete in global markets with trusted credentials.

Yet, new threats to economic stability are emerging at an unprecedented pace. The spread of misinformation, AI-generated fraud, and identity theft undermines business operations, weakens consumer confidence, and creates vulnerabilities in financial markets. AI enables the rapid manipulation of information and identities, making it more difficult for organizations to verify legitimacy and protect against fraud.

Without urgent action, these challenges will erode trust, slow economic growth, disrupt financial systems, and weaken Canada’s competitive position. Labour mobility is also at risk—without trusted digital credentials, skilled professionals such as doctors, engineers, and tradespeople face delays in moving where needed most, affecting both businesses and public services.

Canada can drive economic security, labour mobility and digital trust by strengthening identity verification, authentication, and fraud prevention measures. By prioritizing trust as a national asset, we can enhance economic competitiveness, attract investment, and build a future where innovation thrives in a secure and resilient digital environment.

About DIACC

The Digital Identification and Authentication Council of Canada (DIACC) was created following the federal government’s Task Force for the Payments System Review, with a goal to bring together public and private sector partners in developing a safe and secure digital ecosystem.

DIACC is committed to accelerating digital trust adoption and reducing information authenticity uncertainty by certifying services against its Pan-Canadian Trust Framework — a risk mitigation and assurance framework developed collaboratively by public and private sector experts that signals trustworthy design rooted in security, privacy, inclusivity, accessibility, and accountability.

Recommendations

Against this backdrop, DIACC offers three recommendations for the federal government:

Recommendation 1: That the government prioritize digital trust in four areas critical to Canada’s leadership and the privacy, security and protection of our people and industries, including:

Digital Trust in Small and Medium Sized Businesses (SMBs) and E-Commerce; Digital Trust in Finance and Regulatory; Digital Trust in Public Sector Modernization and Citizen Services; and Digital Trust in Public Safety.

Digital Trust in Small and Medium Sized Businesses (SMBs) and E-Commerce

Canada’s e-commerce sector is growing faster than ever due to emerging technology and changing customer habits. While this creates significant opportunities, it also presents challenges for small and medium businesses (SMBs), their partners, and customers. With a significant amount of business happening online, SMBs must navigate a growing competitive landscape of online security risks and earn customer trust to help unlock interprovincial and international growth opportunities. By prioritizing digital trust, Canada can foster a robust e-commerce environment that empowers SMBs, enhances consumer confidence, and boosts economic growth. Interoperable frameworks such as the DIACC Pan-Canadian Trust Framework (PCTF) foster digital trust by protecting personal electronic information as it travels across an organization, ensuring that e-commerce systems remain secure, adaptable, and trusted.

By prioritizing digital trust and implementing authentication and verification tools, the government can help drive the following benefits:

enhanced customer trust and loyalty;
streamlined business processes by automating identity verification and reducing the need for manual checks;
faster, more efficient operations and reduced administrative costs, allowing businesses to allocate resources more effectively;
data minimization and the secure handling of personal information, increasing customer confidence;
a competitive advantage for Canada’s SMBs by helping them innovate and offer their customers new, secure digital services; and
a reduction in incidents of fraud, resulting in significant cost savings for businesses. These savings can be reinvested into other business areas, driving growth and innovation and improving overall business performance.

Digital Trust in Finance and Regulatory

The finance and regulatory sector is undergoing rapid digital transformation. While the industry pioneers new technology and moves away from conventional platforms, it faces rising fraud, privacy breaches, and growing consumer skepticism fueled by misinformation, disinformation, and challenges verifying information in an AI-driven world. As a result, the government is encouraged to build on the existing regulatory framework and develop new regulations to facilitate secure digital transactions, including compliance with anti-money laundering (AML) and know-your-customer (KYC) regulations.

Further, digital trust and verification services will be critical as the government moves forward with its commitments to open-banking, with interoperability also being paramount as the federal framework and existing provincial frameworks work together. Similarly, the government has committed to reducing incidents of mortgage fraud and strengthening proof of borrower and title insurance, and digital trust and verification services can and should play a critical role in making that commitment a reality.

By prioritizing digital trust, Canada can secure its financial systems and enhance competitiveness in the global economy. Interoperable frameworks like the DIACC Pan-Canadian Trust Framework (PCTF) ensure systems remain resilient, adaptable, and trusted.

Digital Trust in Public Sector Modernization and Citizen Services

Public services are undergoing rapid digital transformation, adopting new technologies to improve efficiency and accessibility. At the same time, they face significant challenges and barriers including data security risks, privacy concerns, and public skepticism fueled by misinformation.

As public services continue to move online, digital trust and verification services will be critical for ensuring that services are secure and accessible. From online healthcare consultations to digital government services, these technologies provide the necessary security infrastructure to protect public interactions and data.

By implementing digital trust solutions, the federal government will be able to provide secure, user-friendly online access to services; streamline identity verification for faster service delivery; facilitate seamless data sharing between agencies; reduce administrative burdens and operational costs; and improve service delivery times and citizen satisfaction.

Digital Trust in Public Safety

The public safety sector is undergoing rapid digital transformation, embracing new technologies to enhance emergency response, law enforcement, and disaster management. However, this shift also brings challenges such as data security risks, privacy concerns, and the need for reliable information verification in critical situations.

Implementing robust digital trust solutions can significantly improve emergency response by enabling secure, real-time data sharing between agencies; verifying the authenticity of emergency communications; and facilitating rapid and accurate identification of individuals in crisis situations.

Public safety agencies are encouraged to leverage technologies such as AI and blockchain to enhance their digital trust capabilities and improve emergency response. AI can be used for real-time data analysis and decision-making, while blockchain can ensure the integrity and immutability of critical information.

DIACC encourages collaboration between public safety agencies, technology providers, and other stakeholders to develop standardized digital trust practices, and interoperable frameworks like the DIACC Pan-Canadian Trust Framework (PCTF) ensure that public safety systems remain secure, adaptable, and trusted.

Together we can create a public safety ecosystem that leverages digital trust to protect citizens, respects privacy, and solidifies Canada’s position as a secure and effective emergency management leader.

Recommendation 2: That the government recognize the necessity of embracing and prioritizing verification and authentication tools as part of its AI strategy.

In today’s world, where AI is becoming smarter every day, and information can be generated and manipulated at unprecedented speed and scale, ensuring the accuracy and trustworthiness of information is critical. It is vital to maximize the benefits of an AI and Artificial General Intelligence (AGI)-fueled data ecosystem for Canada while also fostering citizen trust and protecting their safety.

To effectively address the challenges we’re facing while realizing the benefits of AI, the federal government should prioritize verification and authentication tools as part of its broader AI strategy. Prioritization must include funding, collaboration, and urgent action to support the development, adoption and certification of tools that verify information authenticity while protecting privacy and empowering Canadians. Governments, banks, telcos, tech companies, media organizations, and civil society must work together to deploy open, standards-based solutions and services to verify the authenticity of information.

The economic imperative of investing in these capabilities is clear. According to a study by Deloitte, the Canadian economy could unlock an additional 7 per cent (CAD $7 trillion) in economic value through AI and AGI technologies. People and organizations can only realize this potential for the good of society by investing in tools, processes, and policies that support verifying the authenticity of the information generated and processed by AI and AGI technologies.

Recommendation 3: That the government allocate the funding needed to support the adoption of digital trust tools to the benefit of government, businesses, and citizens alike.

Today, solutions can signal verified trust by getting certified against a technology-neutral risk and assurance framework like DIACC’s Pan-Canadian Trust Framework, developed collaboratively by public and private sector experts.

Verifiable information authenticity relies on critical principles, including provenance and traceability: provenance establishes the origin and history of information, ensuring it comes from a reliable source, while traceability allows for auditability of the flow of information, enabling people, businesses, and governments to verify its accuracy and authenticity. These principles are essential in combating the spread of misinformation and disinformation, which can have far-reaching consequences in an AI-fueled world.

Provenance and traceability are potent information authenticity tools that can help:

businesses and professionals reduce liabilities and meet obligations to verify information about their clients and their operations;
citizens and residents interact securely and efficiently with governments;
customers and clients transact with privacy and security anywhere, anytime;
industries manage decision-making and secure supply chains using trusted data;
producers verify essential data related to environmental, safety, and operational goals; and
creators track intellectual property to ensure fair payment and cultural protection.

Conclusion

A proactive approach—rooted in collaboration between government, industry, and technology leaders—will ensure that Canada remains a trusted hub for global trade, seamless labour mobility, and secure financial transactions. We can unlock new economic opportunities, strengthen international partnerships, and fuel long-term prosperity by enabling frictionless and verifiable trade, business, and employment interactions.

Thank you once again for the opportunity to provide our input in advance of Budget 2025 and as we collectively move forward on the path to a digitally and economically prosperous Canada.


FIDO Alliance

Fime supports fight against identity fraud with FIDO ID verification accreditations

Fime has achieved full  FIDO Alliance Identity Verification (IDV) Certification Program accreditation across multiple regions. Both the Fime EMEA and Fime Taiwan testing laboratories can now support identity verification vendors in certifying their Document Authenticity and Face Verification solutions, helping combat fraud while enhancing the user experience.

With over 70 billion digital identity verification checks conducted in 2024, a reported 52% of people are still concerned about deepfakes and AI-driven fraud. To address this, FIDO introduced the IDV Program, providing a standardized accreditation that ensures remote digital identity verification solutions are secure, reliable, and fraud resistant. 


Next Level Supply Chain Podcast with GS1

Sustainable Threads: The Impact of Sustainability Certifications in Apparel

The apparel industry is working to be more sustainable, but verifying those claims is complicated. With over 70 certifications and no standardized way to share data, brands and retailers struggle to track sustainability efforts efficiently.

In this episode, Amy Reiter, Senior Director of Customer Success for the Apparel and General Merchandise Initiative at GS1 US, joins hosts Reid Jackson and Liz Sertl for a conversation on the challenges of sustainability certifications in apparel. Many companies still rely on PDFs and spreadsheets to verify organic cotton claims and responsible manufacturing. GS1 US is working to change that by improving data-sharing processes and exploring how GTINs (Global Trade Item Numbers) can streamline certification tracking.

Tune in to learn how the industry is tackling sustainability verification and what it means for brands, retailers, and consumers alike.

 

In this episode, you’ll learn:

Why GTINs are critical for accurate sustainability claims

How brands and retailers can replace inefficient certification tracking

The growing role of machine-readable data in product transparency

 

Jump into the conversation:

(00:00) Sustainability Journey at GS1 US

(03:16) Streamlining Textile Certification Data

(06:51) GTIN Certification Process Overview

(10:22) Gen Z Drives Eco-Label Awareness

(13:04) Seamless Machine-Readable Data Sharing

(18:28) Sustainability Practices in Business

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Amy Reiter on LinkedIn

Tuesday, 11. March 2025

Digital ID for Canadians

Advancing Digital Trust to Fuel E-Commerce Growth and Empower Small and Medium-Sized Businesses

March 11, 2025

Current Landscape

Canada’s e-commerce sector is growing faster than ever due to emerging technology and changing customer habits. While this creates significant opportunities, it also presents challenges for small and medium businesses (SMBs), their partners, and customers. With significant business happening online, SMBs must navigate a growing competitive landscape of online security risks and earn customer trust to help unlock interprovincial and international growth opportunities.

When DIACC was established in 2012, its mission was to create a secure digital ecosystem. Today, this goal has become even more critical for the e-commerce sector, particularly for SMBs striving to scale up and remain competitive in a global market.

Digital trust, which empowers individuals, governments, and businesses with secure and transparent ways to engage online confidently, has become critical for businesses that want to assure customers that their interactions and personal data are secure.

By prioritizing digital trust, Canada can foster a robust e-commerce environment that empowers SMBs, enhances consumer confidence, and boosts economic growth. Interoperable frameworks such as the DIACC Pan-Canadian Trust Framework™ (PCTF) foster digital trust by protecting personal electronic information as it travels across an organization, ensuring that e-commerce systems remain secure, adaptable, and trusted.

Advancing Digital Trust to Fuel E-Commerce Growth and Empower SMBs

1. Strengthening SMB Competitiveness and Growth

Implementing robust digital trust solutions is crucial for SMBs to compete in e-commerce. By adopting these technologies, SMBs can:

Enhance customer trust and loyalty
Reduce fraud-related losses
Streamline operations and reduce costs
Expand into new markets more confidently

2. Enhancing Trust Through the DIACC PCTF

DIACC encourages e-commerce businesses to adopt the PCTF as a tool to:

Implement secure and efficient customer onboarding processes
Authenticate identities to reduce fraud in online transactions
Improve supply chain management through verified digital credentials

3. Fostering Consumer Confidence

To address consumer skepticism and promote trust in e-commerce platforms, we recommend:

Implementing clear, user-friendly privacy policies
Adopting visible trust signals, such as PCTF certification badges
Providing transparent data handling practices

4. Enabling Seamless Cross-Border Transactions

Digital trust frameworks can help SMBs expand into other Canadian provinces and internationally by:

Facilitating secure cross-border identity verification
Ensuring compliance with various regional regulations
Building trust with customers and partners across Canada and internationally

5. Leveraging Digital Trust for Innovation

SMBs can use digital trust solutions to:

Implement personalized shopping experiences that are secure and privacy-respecting
Develop trusted AI-powered customer service tools
Create innovative loyalty programs based on verified identity information

Best Practices and the Way Forward

1. Adopt Existing and Emerging Technologies

SMBs should leverage existing and emerging digital trust solutions that align with the PCTF. PCTF certification fosters verified trust across smart devices, digital credentials, wallets, and information-sharing networks. Doing so will enhance their capabilities and ensure their competitiveness.

2. Collaborate for Standardization

DIACC encourages collaboration between SMBs, larger enterprises, and regulators to establish standardized digital trust practices in e-commerce.

3. Educate and Empower

DIACC is committed to educating SMBs and consumers about digital trust through:

Hosting sector-specific workshops and certifications to promote best practices in digital trust
Real-world case studies demonstrating the benefits of digital trust in e-commerce
Advocacy for regulations that support SMBs in implementing digital trust solutions

Conclusion

The e-commerce sector, particularly SMBs, urgently needs robust digital trust solutions to thrive in the digital economy. By adopting frameworks like the PCTF, SMBs can enhance their competitiveness, build consumer trust, and drive innovation.

Together, we can create an e-commerce ecosystem that empowers SMBs, protects consumers, and solidifies Canada’s position as a leader in the global digital marketplace.

Download available here.

DIACC-Position-Digital-Trust-to-Fuel-E-Commerce-Growth-and-Empower-SMBs-ENG

Elastos Foundation

World Computer Initiative (WCI) – March 2025 Community Update!

Before diving into our latest progress, let’s set the stage with a powerful vision. We’re building a new internet—one where every person owns a personal virtual computer that runs on any device, connects via decentralized identity, and taps into a personal NAS station aka private cloud. This personal cloud, in turn, links easily with a peer-to-peer (P2P) network of millions of other user-operated nodes. By giving everyone the ability to run personal AIs on their locally encrypted data, and by conducting markets under blockchain law, we eliminate the need for centralized conglomerates and usher in a new peer-to-peer paradigm in digital empowerment.

 

Illustrating the World Computer Vision

Virtual Computer with Decentralized Identity. Users log in to a “virtual computer” using decentralized credentials—no giant databases or Big Tech gatekeepers. This OS can be accessed from any device, anywhere in the world.

Personal NAS Station (Your Private Cloud). Each user owns a small, cost-effective NAS device at home. It encrypts and stores your files, hosts your private AI models, and manages IoT devices—fully under your control.

P2P Networking. Your NAS station talks directly to millions of other devices on a global P2P network, free from centralized servers or censorship. The result: a new, user-owned internet where no single corporation can block or monitor your activity.

Blockchain Law. Smart contracts handle payments and manage rights. Ownership, royalties, subscriptions, or AI service fees become transparent and tamper-proof.

When we merge decentralized identity with a personal cloud (NAS) and blockchain-governed markets, we re-engineer the web from the ground up—ensuring user autonomy, private AI compute, and unstoppable peer-to-peer trade. This synergy is the heart of Elastos and the World Computer Initiative.

 

Introduction: What is the World Computer Initiative?

Welcome to the latest update on the World Computer Initiative (WCI)—an ambitious project dedicated to building a truly decentralized, secure, and privacy-oriented computing ecosystem! Our mission is to return control of the digital realm to individuals, away from centralized corporations and exploitative data-theft business models. On January 31, 2025, the Cyber Republic Council (the Elastos DAO) PASSED Proposal #180, titled “Empowering ElastOS: The World Computer Initiative.” Proposed by Sash | Elacity and referred by Rong Chen, this motion secures funding and official CR backing to accelerate:

Elacity dDRM v2 rollouts (the consumer-facing Elastos dApp and digital asset marketplace).
Core development for the “World Computer” vision, bridging Elacity’s dDRM platform with Rong Chen’s dream for a PC2.net-based Virtual Computer OS (“ElastOS”).
DePIN Partnerships: Integrating decentralized NAS station hardware and personal cloud nodes into the Elastos ecosystem.
Technical Milestones: Weekly or monthly updates on a roadmap spanning from Q1 to Q3 2025, culminating in a Beta environment for “ElastOS v1” by the end of September 2025.

Key Motives:

Address Elastos’ “flagship product” gap by uniting marketing with real consumer-facing software—Elacity v3.
Align with VC feedback urging Rong Chen to reclaim his “founding father of Web3” leadership role, focusing on a true World Computer for 2025.
Scale Elacity to handle advanced NFT-based goods (royalties, time-based subscriptions, private & public channels) and eventually expand to software, gaming, AI, and more.
Coordinate multiple engineering teams (Elacity, OS, DePIN) and remain agile while providing consistent accountability and transparent monthly updates to the CRC.

 

Recent World Computer Strategic Developments

1. The Home NAS Device: Bringing Decentralization Home

We’re excited to confirm that production for our new Home NAS device will be ready in ~3–4 months. But what is a NAS?

NAS (Network Attached Storage) is a personal storage device on your home network. It helps you store data privately (vs. corporate clouds). Our Home NAS takes it further: it’s decentralized, runs apps locally, encrypts your data, and pairs with IoT/smart-home devices.

Key Hardware Features

Two hard drive slots for secure storage.
Ethernet, Wi-Fi, HDMI, plus a front-facing touch display (borrowed from low-cost e-moped control panels).
Android/Linux OS for a friendly, plug-and-play user experience, including Google TV support.
Multiple secure login options: WeChat (QR pairing), Web2 logins, Web3 wallet logins (Elastos, Particle Network).

Why It Matters

Turnkey “private cloud” at home—no one else can see or manage your data.
AI at Home: Imagine running local AI models (chatbots, personal assistants) on the NAS, so your data is never shared externally.
Connect from anywhere: Remotely access your Home NAS globally with full encryption.

 

2. Digital Photo Frames as Decentralized Displays

The Shanghai (DePIN) hardware team is already producing digital photo frames that can be rebranded or customized with advanced features:

They may soon integrate WebAssembly (WASM) to run lightweight, decentralized apps at near-native speed.
Imagine a frame that securely displays photos and connects to your NAS—no corporate cloud in-between.
The same logic extends to IoT devices: with a private cloud (the Home NAS), you can link health devices, home robotics, or even your smartwatch, all under your control.

3. Introducing Puter: Your Personal Decentralized Cloud

As the base for runtime, we’re integrating the Puter platform—a “virtual computer” or “Web3 OS”—with the Home NAS. But what is Puter?

A decentralized personal cloud: self-host your files, run compute tasks, and enable peer-to-peer social and data interactions. Google Drive without Google. Dropbox without Dropbox. Everything is run on your own hardware, so you decide who accesses your data.

Integration Steps

Standalone Puter Software: A version that supports wallet logins (Elastos or Particle).
Home NAS Integration: Turn your NAS into a personal cloud, seamlessly connected to Puter.
P2P Features: Encrypted messaging, file sharing, NFT-based digital goods.
Long-term: Puter apps become NFTs—buy, sell, or trade them securely on a decentralized marketplace (e.g., Elacity).

 

4. Secure, Decentralized Authentication via Particle Network

We’re currently implementing Particle Network for decentralized Web3 logins—estimated to complete in ~4 to 6 weeks. But why move to Web3 logins?

Centralized logins depend on big databases, which are hackable or censorable. Particle Network uses crypto wallet authentication with abstraction—radically increasing security and privacy but also enabling a friendly user experience.

 

5. Elacity dDRM & NFTs: Redefining Digital Trade

A cornerstone of the World Computer is NFT-based digital rights management (DRM):

Every application, service, or piece of content can become an NFT—enabling transparent royalties, licensing, and monetization. Elacity’s decentralized DRM platform is launching version 3 in ~3 weeks, letting creators mint and sell audio and video digital goods directly, no middlemen required. The continuation of this technology will emerge as SDKs and become available in Puter.
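As a toy illustration of why on-chain terms make royalties transparent, the sketch below splits a sale according to fixed shares; the percentages and parties are invented for illustration and this is not Elacity’s dDRM contract or SDK.

```python
# Toy illustration only: if an asset's royalty terms are recorded alongside the NFT,
# every sale can be split deterministically and audited. Shares below are made up.
def split_sale(sale_price: float, royalties: dict[str, float]) -> dict[str, float]:
    """Distribute a sale according to fixed royalty shares; remainder goes to the seller."""
    payouts = {party: round(sale_price * share, 2) for party, share in royalties.items()}
    payouts["seller"] = round(sale_price - sum(payouts.values()), 2)
    return payouts

terms = {"creator": 0.10, "platform": 0.025}   # hypothetical shares encoded with the asset
print(split_sale(100.0, terms))                # {'creator': 10.0, 'platform': 2.5, 'seller': 87.5}
```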

 

We plan to make the Home NAS P2P-capable, so devices talk directly—no central server.

IPFS is a key technology for distributed file storage; files are chunked, encrypted, and stored across multiple nodes. The NAS might maintain two partitions:

Private (encrypted) partition for personal data.
Shared IPFS partition for optionally “publishing” or sharing files globally and earning token rewards.
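As a rough illustration of the private-partition idea, the sketch below chunks and encrypts data before it would be handed to a distributed store; the chunk size, key handling, and publish step are assumptions for illustration, not Elastos’ actual implementation.

```python
# Minimal sketch (not Elastos code): chunk and encrypt data as a NAS-style private
# partition might, recording a hash per encrypted chunk so the pieces can later be
# addressed or verified on a distributed store such as IPFS.
import hashlib
from cryptography.fernet import Fernet  # symmetric encryption; the key stays on the NAS

CHUNK_SIZE = 256 * 1024  # 256 KiB per chunk (assumed)

def chunk_and_encrypt(data: bytes, key: bytes) -> list[dict]:
    cipher = Fernet(key)
    manifest = []
    for offset in range(0, len(data), CHUNK_SIZE):
        sealed = cipher.encrypt(data[offset:offset + CHUNK_SIZE])
        manifest.append({
            "index": offset // CHUNK_SIZE,
            "sha256": hashlib.sha256(sealed).hexdigest(),  # chunk identifier
            "payload": sealed,                              # what would be published/pinned
        })
    return manifest

if __name__ == "__main__":
    key = Fernet.generate_key()                # in practice, held only by the owner
    sample = b"family photo bytes " * 50_000   # stand-in for a real file
    chunks = chunk_and_encrypt(sample, key)
    print(f"{len(chunks)} encrypted chunks ready to publish")
```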

SuperEXE: Ensuring Trustworthy Computing

We’re exploring SuperEXE, an additional secure runtime environment that verifies modules and services before they run. So why is SuperEXE valuable?

Traditional apps can be tampered with or hacked. SuperEXE cryptographically verifies each piece of software before it runs, ensuring a general-purpose digital capsule is genuine and has not been compromised.

This creates what we call a three-dimensional trust model:

Users trust the hardware and OS.
Creators trust the runtime to protect their IP from piracy or tampering.
Service providers (e.g., networks, streaming) trust the device’s authentication and proof of ownership (NFT-based).
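To make the verification step concrete, here is a minimal, hypothetical sketch of signature-based module checking in the spirit described above; the key handling and workflow are assumptions, and this is not the actual SuperEXE runtime.

```python
# Minimal sketch of signature-based module verification (illustrative assumptions only).
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

def publish_module(module_bytes: bytes, creator_key: Ed25519PrivateKey) -> bytes:
    """Creator signs the module so runtimes can later prove it was not tampered with."""
    return creator_key.sign(module_bytes)

def verify_before_run(module_bytes: bytes, signature: bytes,
                      creator_pub: Ed25519PublicKey) -> bool:
    """Runtime refuses to execute modules whose signature does not check out."""
    try:
        creator_pub.verify(signature, module_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    module = b"wasm-or-native module bytes"
    sig = publish_module(module, key)
    print(verify_before_run(module, sig, key.public_key()))               # True
    print(verify_before_run(module + b"tampered", sig, key.public_key())) # False
```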

 

Extended World Computer Discussion and Roadmap

Hardware Partnerships: DePIN Team

Producing the Home NAS and photo frames, and can rebrand them (e.g., “Elacity”).
Seeking international market exposure; sees Web3 devices as a strategic edge.

Mini-Apps & Ecosystem

50–100 “mini-apps” (WeChat backup, iWatch data export, note-taking, etc.) ready to be ported.
Interested in exploring decentralized capabilities on top of Puter + Home NAS.

 

Proposed Roadmap

Short-Term

Release Elacity v3 for NFT-based DRM.
Integrate Particle login (~4–6 weeks) to Puter.
Finalize the initial Home NAS MVP (private data + simple Web3 login).
Onboard the NAS device into a “vertical market” prototype (storage, NFT-based content, wallet logins).
Collaborate with Beijing mini-app developers to broaden use cases.

Medium-Term

Deploy advanced P2P networking (PC2.net).
Expand the mini-app ecosystem to handle tasks like AI, advanced IoT, or specialized data management.
Bridge everything with a user-friendly “World Computer OS.”

Long-Term

Implement SuperEXE at scale: no tampering, no unverified modules.
Build an advanced decentralized marketplace for apps, digital goods, and services, all NFT-based.
Achieve a full “Web3 World Computer” vision, bridging hardware, software, and blockchain into an all-in-one portal.

We welcome community input and collaboration. Let’s build the future—together. Thank you for reading this comprehensive overview. Feel free to share it with your community or pass it to an AI for podcast creation. Together, we’re forging a new internet built on decentralization—one personal device at a time.

 


FIDO Alliance

MobileIDWorld: Tech Giants Microsoft, Google, and Apple Drive Global Passkey Adoption with Visa Support

Major technology companies Microsoft, Google, and Apple are driving widespread adoption of passkeys as an alternative to traditional passwords, leveraging biometric authentication methods like facial recognition and fingerprint scanning for enhanced security and user convenience. The initiative builds on the FIDO Alliance standards that these companies have been developing since 2019.

The initiative, which began with a joint announcement by the three tech giants in 2022, has now reached full implementation across all major platforms. Users can access passkey functionality through their devices’ built-in biometric systems, enabling seamless authentication across various services and applications. Microsoft has recently announced plans to implement passkeys for over one billion users in response to a 200 percent increase in cyberattacks.


Security Infowatch: How FIDO Can Safeguard Against Advanced Cyber Threats

FIDO as the Future of Authentication: Traditional password-based systems are vulnerable to phishing, credential stuffing, and other cyberattacks. FIDO (Fast Identity Online) uses public key cryptography to deliver phishing-resistant, passwordless authentication.
Implementation Roadmap: Organizations should assess current authentication methods, educate stakeholders, select FIDO-compatible solutions, and roll out the technology gradually to maximize security and user adoption.
Security Meets Usability: FIDO enhances security and simplifies the user experience with biometrics, hardware tokens, and multi-device passkeys, offering both protection and convenience.
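For readers new to the mechanism, the sketch below shows the bare challenge/response idea behind FIDO’s public key cryptography; real WebAuthn/FIDO2 flows add origin binding, authenticator data, attestation, and signature counters, so treat this strictly as an illustration.

```python
# Highly simplified sketch of the public-key challenge/response idea behind FIDO.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a key pair; only the public key leaves the device.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it after a local gesture (biometric or PIN)...
assertion = device_key.sign(challenge)

# ...and the server verifies the signature. There is no shared secret to phish or replay.
try:
    server_stored_public_key.verify(assertion, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```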


Forbes: AI Can Crack Your Passwords Fast—6 Tips To Stay Secure

Do you think your trusty 8-character password is safe? In the age of AI, that might be wishful thinking. Recent advances in artificial intelligence are giving hackers superpowers to crack and steal account credentials. Researchers have demonstrated that AI can accurately guess passwords just by listening to your keystrokes. By analyzing the sound of typing over Zoom, the system achieved over 90% accuracy in some cases.

And AI-driven password cracking tools can run millions of guess attempts lightning-fast, often defeating weak passwords in minutes. It is no surprise, then, that stolen or weak passwords contribute to about 80% of breaches​.

The old password model has outlived its usefulness. As cyber threats get smarter, it is time for consumers to do the same.


MobileIDWorld: Google Replacing Gmail SMS Authentication with QR Code Verification System

Google has announced plans to phase out SMS-based authentication for Gmail accounts in favor of more secure methods like QR code verification and passkeys. The change follows similar moves by other tech giants like Microsoft and Apple to strengthen authentication methods as part of the company’s broader security enhancement initiatives.


Biometric Update: Passkeys for enterprise report from FIDO says adoption is growing

A new report from the FIDO Alliance aims to understand the state of passkey deployments by enterprises in the U.S. and UK, including methods for deploying FIDO passkeys, total employees enrolled and perceived barriers to deployment.

Based on a survey of 400 IT professionals (200 from each country), the report says passkey adoption for employee sign-ins is a high or critical priority for two thirds of respondents, and that the majority of enterprises have “either deployed or are in the midst of deploying passkeys with goals tied to improved user experience, enhanced security and standards/regulatory compliance.”


Identity Week: New FIDO Alliance report: 87% of enterprises in the U.S. and UK are deploying passkeys

The FIDO Alliance along with underwriters Axiad, HID, and Thales today released its State of Passkey Deployment in the Enterprise report, finding that 87% of surveyed companies have, or are in the midst of, rolling out passkeys with goals tied to improved user experience, enhanced security, and compliance.

Monday, 10. March 2025

OpenID

Notice of Vote for Proposed Implementer’s Draft of OpenID4VC High Assurance Interoperability Profile

The official voting period will be between Tuesday, March 25, 2025 and Tuesday, April 1, 2025 (12:00pm PT), once the 45-day review of the specification has been completed. For the convenience of members who have completed their reviews by then, early voting will begin on Tuesday, March 18, 2025.

The Digital Credentials Protocols (DCP) working group page is https://openid.net/wg/digital-credentials-protocols/. If you’re not already an OpenID Foundation member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.

The vote will be conducted at https://openid.net/foundation/members/polls/355.

Marie Jordan – OpenID Foundation Secretary

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.   

The post Notice of Vote for Proposed Implementer’s Draft of OpenID4VC High Assurance Interoperability Profile first appeared on OpenID Foundation.


Hyperledger Foundation

LF Decentralized Trust Mentorship Spotlight: CC-Tools Support for Fabric Private Chaincode

Participating in the Hyperledger Fabric CC-Tools Support for Fabric Private Chaincode project through the LF Decentralized Trust Mentorship program has been an exciting and enriching experience. This journey has challenged me technically, expanded my problem-solving skills, and deepened my understanding of open source collaboration. Throughout the project, I contributed to two open source projects while learning from the community, refining my skills, growing as a developer, and having a lot of fun.


Oasis Open

EU Cyber Acts Conference 2025: A Deep Dive into Securing AI

By Omar Santos, Distinguished Engineer, Cisco

On March 25, 2025, the EU Cyber Acts Conference in Brussels will bring together cybersecurity professionals, policy-makers, and industry leaders to discuss one of the most pressing challenges of our time—securing artificial intelligence. Several members of the Coalition for Secure AI (CoSAI) will be participating and presenting at the conference. A key highlight of the conference is the “AI Cyber Day” track, which zeroes in on the global evolution of cybersecurity certification frameworks tailored for AI systems.

Securing AI Applications and Agentic Systems

I’m truly honored to be among such a distinguished lineup of speakers, presenting “Securing AI: Navigating Security Challenges in Modern AI Implementations.” I am excited to share insights on the evolving threats and best practices for securing AI systems! In this session, we will unpack the layered security considerations inherent to today’s AI implementations. Attendees will be guided through the best practices surrounding AI operations (AI Ops), model development, fine-tuning, and deployment of AI applications. Emphasis will be placed on well-known techniques such as Retrieval Augmented Generation (RAG) and how to secure innovative agentic systems (methods that represent the forefront of modern AI security strategies).

The session also introduces the Coalition for Secure AI (CoSAI), an open project that brings together experts from industry-leading organizations dedicated to sharing best practices for secure AI.

CoSAI’s focus is to better equip the community to fortify the AI supply chain, equip defenders for emerging threats, secure agentic AI systems, and promote robust security risk governance frameworks. As the digital ecosystem evolves, CoSAI will help organizations of all sizes to secure their AI implementations.

Both the EU and the US are spearheading efforts to create harmonized regulatory frameworks that establish common methodologies and promote innovation. There is a shared realization that regulation must strike a balance that doesn’t hinder innovation. During AI Cyber Day, experts will discuss:

The current state of regulatory development in both regions.
Future outlooks for AI security regulation.
Best practices for robust risk management and promoting innovation.

Collaborative Panel Discussion: Charting the Future of AI Security

Complementing the technical presentations, the conference will feature a panel discussion titled “Collaborative Efforts to Secure AI and AI Applications and Services.” In this session, my colleague Piotr Ciepiela, from EY, will highlight the role of CoSAI in advancing secure AI practices.

This session will bring together influential voices from governments, industry, and academia to share insights on building a safer AI ecosystem. The discussion will focus on:

Developing and sharing best practices and tools for secure AI deployment.
Collaborative strategies to navigate the challenges of securing AI applications.
The role of CoSAI in uniting diverse stakeholders under a common mission of enhanced AI security.

The panel includes a distinguished lineup of speakers:

(Moderator) Matthias Intemann, Head of Digitisation, Bundesamt für Sicherheit in der Informationstechnik (BSI), Germany
Franziska Weindauer, CEO of TÜV AI.Lab, TÜV-Verband, Germany
Piotr Ciepiela, Partner and EMEIA Cybersecurity Leader at EY, Poland
Ezi Ozoani, Representative to GPAI Code of Practice at Applied AI Institute for Europe GmbH, Ireland

Looking Forward to the Discussion

The spotlight on AI security is more intense than ever. With evolving threats and rapid technological advancements, the EU Cyber Acts Conference offers invaluable insights for anyone involved in the AI space.

Whether you’re an industry veteran, a policymaker, or simply curious about the future of AI, these presentations are set to provide great insights into what’s new in AI and cybersecurity. Mark your calendars for March 25-26, 2025, and join the conversation on securing the future of AI.

Additional Exciting News

CoSAI is part of the OASIS Open ecosystem where many other related technical initiatives are happening. A new one worth noting is the Data Provenance Standards (DPS) Technical Committee, which reflects a shared focus on ensuring the trust and security of AI systems, as well as strong business outcomes. DPS will develop a standardized metadata framework for tracking data origins, transformations, and compliance, helping organizations establish clearer governance practices. Visit the OASIS Open website to learn more about this new initiative and others.

The post EU Cyber Acts Conference 2025: A Deep Dive into Securing AI appeared first on OASIS Open.

Friday, 07. March 2025

Oasis Open

Invitation to comment on Energy Interoperation Common Transactive Services (CTS) v1.0 CSD05

Public review -- ends March 23

OASIS and the Energy Interoperability TC are pleased to announce that Energy Interoperation Common Transactive Services (CTS) Version 1.0 CSD05 is now available for public review and comment. 

Common Transactive Services (CTS) permits energy consumers and producers to interact through energy markets by simplifying actor interaction with any market. CTS is a streamlined and simplified profile of the OASIS Energy Interoperation (EI) specification, which describes an information and communication model to coordinate the exchange of energy between any two Parties that consume or supply energy, such as energy suppliers and customers, markets and service providers.

The TC is specifically requesting comments on the following sections:

Section 2.4 (Responses)
Section 9 (Negotiation Facet)
Section 13 (Market Structure and Reference Data)
Section 14 (Conformance)

Energy Interoperation Common Transactive Services (CTS) Version 1.0
Committee Specification Draft 05
17 February 2025

PDF (Authoritative): https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd05/ei-cts-v1.0-csd05.pdf

HTML: https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd05/ei-cts-v1.0-csd05.html

Editable Source (DOCX): https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd05/ei-cts-v1.0-csd05.docx

For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at:  

https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd05/ei-cts-v1.0-csd05.zip

How to Provide Feedback

OASIS and the Energy Interoperability TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

The public review is now open and ends March 23, 2025 at 23:59 UTC.

Comments from TC members should be sent directly to the TC’s mailing list. Comments may be submitted to the project by any other person through the use of the project’s Comment Facility: https://groups.oasis-open.org/communities/community-home?CommunityKey=70a647c6-d0e6-434c-8b30-018dce25fd35

Comments submitted for this work by non-members are publicly archived and can be viewed by using the link above and clicking the “Discussions” tab.

Please note, you must log in or create a free account to see the material. Please contact the TC Administrator (tc-admin@oasis-open.org) if you have any questions regarding how to submit a comment.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification. 

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the Energy Interoperability TC can be found at the TC’s public home page: https://www.oasis-open.org/committees/energyinterop/

Additional references:

[1] https://www.oasis-open.org/policies-guidelines/ipr/

[2] https://www.oasis-open.org/committees/energyinterop/ipr.php

The post Invitation to comment on Energy Interoperation Common Transactive Services (CTS) v1.0 CSD05 appeared first on OASIS Open.


Origin Trail

UMANITEK: Setting the standard for internet safety

Today, artificial intelligence (AI) is rapidly reshaping the Internet, driving a historic transformation in how we engage, work, and communicate online.

However, the rise of generative AI has also led to an explosion of deepfakes, hallucinating language models, and the rapid creation of untrustworthy content — threatening the foundation of authentic communication and learning. AI-generated content now dominates the internet, making it increasingly difficult to distinguish reality from fabrication.

While AI unlocks significant advancements, it also introduces equally substantial risks, from intellectual property infringements to illegal content, such as child sexual abuse materials.

It is for this reason that we founded umanitek.

At umanitek, our mission is to fight against harmful content and the risks of AI by promoting technology that serves the greater good of humanity.

Our founders, Trace Labs, Ethical Capital Partners and AMYP Ventures AG (part of a Piëch/Porsche Family Office) bring together their capabilities in building reliable and trusted AI systems, their connection to networks that fight for the removal of internet harm, and their ability to raise awareness of the importance of knowledge and education in the age of AI.

But this is too big of a challenge to go at it alone. Recognizing the magnitude of this issue, we actively seek partnerships with institutions and individuals dedicated to ethical AI development. We want to partner with investors who are focused on “tech for good” solutions where societal impact is of equal importance to commercial success and to work with tech leaders, policymakers, and law enforcement to make internet safety the standard in the age of AI.

Balancing innovation with responsibility in the age of AI.

Our vision is to leverage umanitek’s technology to enable corporations and individuals to control their data, technology, and resources without compromising security, privacy, or intellectual property.

Here’s but one quick example of how umanitek will work.

Far too many people are concerned about the non-consensual sharing of their personal images or those of their children. Umanitek will enable companies, law enforcement, NGOs, and individuals to upload “fingerprints” of personal photos to a decentralized directory. This system will help large technology platforms identify and prevent the distribution of such content.

As a potential next step, the system could also significantly streamline the prosecution of offenders through collaboration with law enforcement, while reducing the cost and complexity of legal action related to copyright infringements.
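As a rough illustration of the fingerprinting idea (umanitek’s actual design has not been published in detail), a platform could compare a perceptual hash of each upload against a shared directory of hashes; the library, threshold, and directory shape below are assumptions for illustration only.

```python
# Illustrative sketch: publish only a perceptual hash ("fingerprint") of an image,
# never the image itself, and let platforms check uploads against the directory.
import imagehash
from PIL import Image

MATCH_THRESHOLD = 6  # maximum Hamming distance treated as "same image" (assumed)

def fingerprint(path: str) -> imagehash.ImageHash:
    """Perceptual hash: survives re-encoding and mild edits, reveals no pixels."""
    return imagehash.phash(Image.open(path))

def is_blocked(upload_path: str, directory: list[imagehash.ImageHash]) -> bool:
    """A platform compares an upload's fingerprint against the shared directory."""
    candidate = fingerprint(upload_path)
    return any(candidate - known <= MATCH_THRESHOLD for known in directory)

# Usage (file paths are hypothetical):
# directory = [fingerprint("reported_image.png")]
# print(is_blocked("incoming_upload.jpg", directory))
```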

When organizations and individuals can choose what to share and how to share it in a secure and verifiable way, all internet users benefit. Protecting legitimate content and preventing large language models from training on non-consensual data are integral to harm reduction online. We believe this is an important step to making internet safety the standard in the age of AI, reducing harmful content, and enabling trusted AI solutions.

Fighting the good fight.

“We invested in OriginTrail to drive transparency and trust for real-world assets. Now, we’ve co-founded umanitek to combat harmful content, IP infringements, and fake news — leveraging OriginTrail technology across internet platforms.”

— Chris Rynning, AMYP Ventures AG (part of a Piëch/Porsche Family Office)

An unprecedented alliance for ethical AI.

Umanitek stands out by combining the expertise of three leaders in their fields:

Trace Labs (core developers of OriginTrail) — The pioneers of neuro-symbolic AI, building trusted and verifiable AI systems. They are the developers behind the OriginTrail Decentralized Knowledge Graph (DKG), a technology that enhances trust in AI, supply chains, and global data ecosystems.

Ethical Capital Partners (ECP) — A private equity firm seeking out investment and advisory opportunities in industries that require principled ethical leadership. Founded in 2022 by a multi-disciplinary team with legal, regulatory, law enforcement, public engagement, and finance experience, ECP’s philosophy is rooted in identifying companies amenable to a responsible investment approach and working collaboratively with management teams in order to develop strategies to create value and drive growth.

AMYP Ventures AG (part of a Piëch/Porsche Family Office) — A venture capital group backing game-changing AI and Web3 initiatives with the potential for global impact.

This is a collaboration that combines the knowledge of AI, cutting-edge research, and technology with ethical investment strategies to create the standard for internet safety in the age of AI — an AI solution that will serve humanity.

Subscribe for updates at umanitek.ai to stay in touch and be among the first to learn about cofounders, contributors, and partners of umanitek, as well as reserve a spot to test-drive umanitek’s products at their release.

Web | Twitter | LinkedIn

UMANITEK: Setting the standard for internet safety was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


OpenID

Webinar on IPSIE secures more than 300 registrations

More than 300 identity security leaders and identity professionals registered for a recent webinar where our Executive Director Gail Hodges discussed with industry experts Jeff Reich, Dean H. Saxe, Aaron Parecki, and George Fletcher how enterprises can achieve secure, interoperable identity management using multiple standards, and how new enterprise interoperability profiles can strengthen security and streamline identity management.

The webinar – Securing the Future of Identity with IPSIE – A New Industry Standard – introduced the OpenID Foundation’s IPSIE Working Group, which is tackling this challenge head-on with new interoperability profiles to strengthen security and streamline identity management.

You can listen to the webinar here.

The OpenID Foundation urges SaaS providers and enterprises to get involved and be a part of shaping a new standard that will strengthen identity security across the enterprise landscape. For further details on how to participate, please visit the IPSIE Working Group page. 

The post Webinar on IPSIE secures more than 300 registrations first appeared on OpenID Foundation.

Thursday, 06. March 2025

Oasis Open

OASIS to Advance Global Adoption of Data & Trust Alliance’s Data Provenance Standards

Cisco, IBM, Intel, Microsoft, Red Hat, and Others Unite to Promote Cross-Industry Standards for Traditional Data and AI Applications

Boston, MA, and New York, NY, USA; 6 March 2025 – OASIS Open, a global open source and standards organization, and the Data & Trust Alliance, a consortium dedicated to developing data and AI practices that create business value and earn trust, announced the upcoming launch of the OASIS Data Provenance Standards Technical Committee (DPS TC). Building on version 1.0.0 of the Data Provenance Standards created by the Data & Trust Alliance’s cross-industry Working Group, the TC will bring more enterprises to the table to create de jure technical standards that aim to advance data transparency, accountability, and trust. Founding sponsors include Cisco, IBM, Intel, Microsoft, and Red Hat. 

“The Data & Trust Alliance has done exceptional work in developing the Data Provenance Standards, and OASIS is privileged to partner with them to expand the community actively developing and implementing these standards,” said Jim Cabral, Interim Executive Director of OASIS Open. “By advancing these standards in our open, consensus-driven environment, we ensure their continued evolution, interoperability, and adaptability to meet evolving industry demands.”

With AI and data-driven decision-making now central to business operations, organizations require robust mechanisms to verify data lineage, transformations, and compliance. The DPS TC will develop a standardized metadata framework for tracking data origins, transformations, and compliance, helping businesses establish clearer governance practices. The TC will also define metadata models that span databases, tables, and data pipelines to ensure interoperability and reliability across different platforms. 

“For AI to create value for business and society, the data that trains and feeds models must be trustworthy. Launching the Data Provenance Standard Technical Committee marks a milestone in fostering greater transparency and trust of AI-driven data,” said Saira Jesani, Executive Director, Data & Trust Alliance. “We look forward to bringing the TC’s expertise to bear to not only refine these standards but also bridge the gap between standards and implementation, as we drive towards industry-wide adoption.” 

The standards will enable data producers to deliver clear and consistent data lineage information; support companies in managing compliance and mitigating risks associated with data privacy, security, and intellectual property rights; provide data acquirers with transparency around the data they aim to acquire and a mechanism to determine whether to trust and use the data on offer, request changes to the data set, or reject its use; and help end-users by providing transparency into how their data is managed and protected, fostering trust in data-driven solutions. 
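Although the committee’s schema is still being defined, a provenance record of this kind can be pictured as a small, machine-readable structure capturing origin, lineage, and usage terms; the field names below are illustrative assumptions only and are not the Data Provenance Standards.

```python
# Illustrative only: a toy metadata record showing the kinds of fields a data
# provenance standard might track. Field names are assumptions, not the DPS schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    dataset_id: str
    source: str                       # where the data originated
    collected_at: str                 # ISO 8601 timestamp
    license: str                      # terms under which the data may be used
    transformations: list[str] = field(default_factory=list)  # ordered lineage steps
    contains_personal_data: bool = False

    def add_step(self, description: str) -> None:
        """Append a transformation so downstream consumers can audit lineage."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.transformations.append(f"{stamp}: {description}")

record = ProvenanceRecord(
    dataset_id="sales-2024-q4",
    source="internal CRM export",
    collected_at="2025-01-05T00:00:00Z",
    license="internal-use-only",
)
record.add_step("removed direct identifiers")
record.add_step("aggregated to weekly totals")
print(asdict(record))
```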

The DPS TC’s first meeting will be held on 8 April 2025. Participation in the DPS TC is open to all through membership in OASIS. Organizations, industry leaders, and experts are encouraged to join and actively contribute to these data provenance standards that will shape the future of transparent and trusted data governance. For more information, please visit the DPS TC’s homepage.

About OASIS Open
One of the most respected, nonprofit open source and open standards bodies in the world, OASIS advances the fair, transparent development of open source software and standards through the power of global collaboration and community. OASIS is the home for worldwide standards in AI, emergency management, identity, IoT, cybersecurity, blockchain, privacy, cryptography, cloud computing, urban mobility, and other content technologies. Many OASIS standards go on to be ratified by de jure bodies and referenced in international policies and government procurement. 

About Data & Trust Alliance
The Data & Trust Alliance was founded as a not-for-profit consortium to bring together leading businesses and institutions across multiple industries to learn, develop, and adopt responsible data and AI practices. Data & Trust Alliance member companies span 15 industries, operate in more than 175 countries, and generate more than $1.6 trillion in annual revenues.

Media Inquiries:
OASIS Open: communications@oasis-open.org
Data & Trust Alliance: inquiries@dataandtrustalliance.org

Additional Information:
DPS TC Project Charter

Support for the Data Provenance Standards Technical Committee:  

Cisco: 

“I applaud the OASIS community for its forward-thinking creation of the Data Provenance Standards TC. By creating standardized descriptors at the point of data creation, we are forging a path that empowers organizations to safeguard data integrity, security, and privacy throughout its entire lifecycle. This is essential for both AI-driven and traditional applications. These standards will not only enhance transparency and accountability, but also lay the foundation for robust, cross-industry data governance.” 

–Omar Santos, Distinguished Engineer at Cisco, OASIS Board Member

IBM:

“IBM is proud to build on its partnership with the Data & Trust Alliance to become a founding member of the OASIS Data Provenance Standards Technical Committee. As a contributor to the Data & Trust Alliance’s Data Provenance Standards, we are pleased that the DPS TC will evolve the critical work of advancing data transparency started by the Data & Trust Alliance. We look forward to helping organizations accelerate the business impact of AI through trust in our work with the DPS TC.”

–Christina Montgomery, Vice President and Chief Privacy & Trust Officer, IBM

Microsoft:

“Security and Trust remain at the top of mind while Microsoft executes its mission of empowerment. We promote and demand this ethos while developing operationally efficient and trustworthy AI systems. To do this successfully, we believe that full transparency into the data used including where it comes from, how it’s created, and whether it can be used legally is of extreme importance. As a founding member of the Data Provenance Standard Technical Committee (DPS TC), Microsoft will partner with similarly committed organizations towards creating industry standards for ensuring that AI systems are built with transparency, accountability, and trust through establishing data provenance standards as a foundation for improved data governance. Through membership and partnership in the DPS TC, Microsoft continues its commitment to empower every person and every organization on the planet to do more…securely.” 

–Raghu Ramakrishnan, CTO for DATA, Technical Fellow, R&D Azure Data, Microsoft

Red Hat:  

“Red Hat is proud to join the OASIS Data Provenance Standard Technical Committee. With AI rapidly evolving, it’s crucial we address the provenance challenge, together as a community, to help maintain data integrity and user trust. Red Hat is eager to collaborate on keeping security and compliance measures at the forefront of AI development, and we look forward to how this initiative will help unify our efforts toward an open and trusted AI ecosystem.”

–Vincent Danen, VP, Product Security, Red Hat

The post OASIS to Advance Global Adoption of Data & Trust Alliance’s Data Provenance Standards appeared first on OASIS Open.


OpenID

OpenID Federation Interop Event, April 28-30, 2025

The OpenID Foundation will be holding an interop testing event Monday-Wednesday, April 28-30, 2025 for OpenID Federation implementations.  In-person participation is encouraged, but if it works better for you, you can also participate remotely.

The event will be hosted by SUNET at their office in Stockholm, Sweden.

Event Details:

Date: Monday-Wednesday, April 28-30, 2025

Time: Mid-day Monday to mid-day Wednesday

Location: SUNET, Tulegatan 11, Third Floor, 113 53 Stockholm, Sweden

Suggested Hotels:

Hotel Birger Jarl – Across the street from SUNET
Best Western Kom Hotel Stockholm
Hotel Hellsten

Virtual Option: Details on how to participate remotely will be emailed to registrants nearer the time

What is the interop event and who is eligible?

Come test your OpenID Federation implementation alongside those of others.  You need not have a complete implementation to participate.  You’ll also be trying out the OpenID Certification tests being developed for OpenID Federation.

Why Attend?

Collaborate with others using OpenID Federation.
Validate the specification, helping us advance it towards final status.
Validate your implementation, identifying areas that you can improve.
Test the tests, laying the groundwork for OpenID Federation certification testing.

Interop Topics Include:

Profiles: Automatic Registration, Explicit Registration, Federation Wallet, Extended Listing
Metadata: Metadata Resolution, Metadata Policy Testing
Trust Marks: Schema validation, Delegated Trust Marks, Trust Mark Status
Topologies: Multiple Trust Anchors, Inter-federation where a Trust Anchor is subordinate to another
Trust Chains
Resolvers
Historical Keys
Error Scenarios
Certification: Test the OpenID Federation certification tests!
Bonus: Try out Giuseppe De Marco’s OpenID Federation browser

Watch this space for updates as the dates approach.

Please register your interest in participating in the list of potential participants.

The post OpenID Federation Interop Event, April 28-30, 2025 first appeared on OpenID Foundation.


DIF Blog

DIF Newsletter #49

March 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Community Events; 7. DIF Member Spotlights; 8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

Creator Assertions Working Group Joins DIF

DIF welcomes the Creator Assertions Working Group (CAWG) as its newest working group! CAWG builds upon C2PA's work by defining additional assertions allowing content creators to express intent about their content and bind their online identity to what they produce. This collaboration with ToIP will be chaired by Eric Scouten from Adobe, with meetings beginning March 10.

To participate, join DIF!

Read more: Welcoming Creator Assertions Working Group to DIF

DIDComm User Group Expands!

The DIDComm User Group offers developers an open forum to learn about and implement DIDComm protocol. Colton Wolkins highlights three compelling reasons to join:

1. Seeing real-world demonstrations from implementers
2. Getting help with technical challenges
3. Learning about DIDComm adoption across industries and regions

The group meets regularly to accommodate global participants.

Read more: 3 reasons why you should join the DIDComm User Group Meeting

Cryptographic Pseudonyms: A Short History

Following IETF/IRTF's adoption of BBS Blind Signatures and BBS per-Verifier Linkability specifications, this comprehensive piece by Greg Bernstein, Dave Longley, Manu Sporny, and Kim Hamilton Duffy explores the evolution of cryptographic pseudonyms and their privacy features. The article examines how BBS signatures provide protections against credential fraud.

Read more: Cryptographic Pseudonyms: A Short History

DIF Hackathon 2024 Winners Wrap Up

The DIF Hackathon 2024 showcased remarkable innovation across multiple tracks including education and workforce solutions, reusable identity, and privacy-preserving authentication. Standout projects included ZKP implementations, verifiable credentials powering next-gen job boards, seamless hotel check-ins, and digital identity solutions for expats. See the full list of winners and their groundbreaking projects.

Read more: 🚀 Celebrating Innovation: Winners of the DIF Hackathon 2024

🛠️ Working Group Updates

DID Methods Working Group

The group discussed categorization of DID methods, blockchain-based DID methods, the need for standardization in web-based methods and decentralized methods for government use cases, and the proposal for a new charter for the W3C Working Group. The group agreed on the need for further refinement and discussion before a vote could be held.

DID Methods meets bi-weekly at 9am PT/ noon ET/ 6pm CET Wednesdays

Identifiers and Discovery Working Group

The Identifiers & Discovery group discussed various work items, including Linked VPs and DID:webvh method, as well as open source code projects like the Universal Resolver. The group also explored the potential of biometric technology to create a private key directly from a person's face without storing any biometric data or involving a server, and considered the possibility of generating DIDs from biometric data.

Identifiers and Discovery meets bi-weekly at 11am PT/ 2pm ET/ 8pm CET Mondays

🪪 Claims & Credentials Working Group

DIF recently hosted a special Credential Schemas workshop focused on privacy-preserving age verification solutions. Led by Otto Mora (Privado ID) and Valerio Camiani (Crossmint), participants explored innovative approaches beyond traditional verification methods, including AI-based age estimation while maintaining strong privacy protections.

Read more: DIF Workshop Highlights Progress on Privacy-Preserving Age Verification Standards

Credential Schemas work item meets bi-weekly at 10am PT/ 1pm ET/ 7pm CET Tuesdays

Applied Crypto Working Group

The general Applied Crypto Working Group has finished a draft of a general trust model for ZKP self-attestations and is working through feedback. It’s a great time to get involved.

The Crypto BBS+ Work Item group is addressing feedback from the CFRG Crypto Panel review.

BBS+ work item meets weekly at 11am PT/ 2pm ET/ 8pm CET Mondays
Applied Crypto Working Group meets bi-weekly at 7am PT/ 10am ET/ 4pm CET Thursdays

DIF Labs Working Group

The DIF Labs Show and Tell event was held on February 18, featuring three groundbreaking projects from the DIF Labs Beta Cohort. After three months of development and refinement, these projects demonstrated cutting-edge innovation in decentralized identity with real-world applications.

The featured projects included:

Ordinals Plus: Brian Richter presented a framework for implementing verifiable credentials on Bitcoin using Ordinal inscriptions.
Linked Claims: Golda Velez, Agnes Koinange, and Phil Long demonstrated their system that combines attestations to build progressive trust.
VerAnon: Alex Hache showcased a protocol for anonymous personhood verification using Semaphore and zero-knowledge proofs.

The event provided attendees the opportunity to engage directly with project creators, offer feedback, and network with industry leaders driving the future of identity. As the Beta program concludes, DIF Labs is now preparing for its next cohort and invites builders, startups, and innovators passionate about decentralized identity to get involved.

Read about DIF Labs here

DIF Labs meets on the 3rd Tuesday of the month at 8am PT/ 11am ET/ 5pm CET

DIDComm Working Group

The DIDComm WG is discussing the Trust Spanning Protocol (TSP) and its potential integration with DIDComm to leverage the best of both protocols.

DIDComm Working Group meets the first Monday of each month noon PT/ 3pm ET/ 9pm CET

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.

🌎 DIF Special Interest Group Updates
DIF Hospitality & Travel SIG

The H&T group featured presentations from The Camino Network Foundation, Customer Futures, and Indicio. The group is discussing a Glossary project, as well as formation of a DIF working group for specification development.

Meetings take place weekly on Thursdays at 10am EST. Click here for more details

DIF China SIG

Click here for more details

APAC/ASEAN Discussion Group

The group discussed progress of their healthcare project, the development of a platform for verifying yellow fever vaccination cards, and the introduction of a new system for verifying the legitimacy and identity of businesses and individuals. They discussed the concept of a foundational identity and the importance of government involvement.

This group is seeking more participants in their calls, so please join!

The DIF APAC call takes place Monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.

DIF Africa SIG

The DIF Africa SIG discussed DID:UNCONF Africa and plans to aggregate and publish the Book of Proceedings from the DID:UNCONF sessions on the event website.

Meetings take place Monthly on the 3rd Wednesday at 1pm SAST. Click here for more details

DIF Japan SIG

Meetings take place on the last Friday of each month 8am JST. Click here for more details

📖 DIF User Group Updates
DIDComm User Group

There are two meeting series to accommodate different time zones, each taking place every Monday except the first week of the month (which is reserved for DIDComm Working Group). Click here for more details.

Veramo User Group

Meetings take place weekly on Thursdays, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details

📢 Announcements at DIF

Conference season is kicking into high gear. Explore our Events calendar to meet the DIF community at leading Decentralized Identity, Identity, and Decentralized Web events.

🗓️ DIF Members

Dr. Carsten Stöcker Appointed DIF Ambassador

DIF has appointed Dr. Carsten Stöcker, founder and CEO of Spherity GmbH, as DIF Ambassador. Dr. Stöcker brings extensive experience implementing decentralized identity across industrial ecosystems, with expertise in Verifiable Digital Product Passports for regulated industries and bridging European Digital Identity initiatives with Industry 4.0 applications.

Read more: Announcing Dr. Carsten Stöcker as DIF Ambassador

👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community:

Join DIF: https://identity.foundation/join/
Visit our website to learn more
Follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube

🔍 Read the DIF blog

New Member Orientations

If you are new to DIF join us for our upcoming new member orientations. Find more information on DIF’s slack or contact us at community@identity.foundation if you need more information.

Wednesday, 05. March 2025

Hyperledger Foundation

Blockchain Pioneers: Hyperledger Explorer

As Hyperledger Foundation laid out in the Helping a Community Grow by Pruning Inactive Projects post, there is an important life cycle to well governed open source projects. Through the evolution of the market, Hyperledger Foundation and, now, LF Decentralized Trust has been the home to a growing ecosystem of blockchain, identity, and related projects.



Internet Safety Labs (Me2B)

Global Shield Online Conference: Measuring Unavoidable Risks in Technology [VIDEO]


Our Executive Director, Lisa LeVasseur, gave a presentation at this year’s Global Shield Online Conference on “Measuring Unavoidable Risks in Technology”. The entirety of the presentation can be viewed in the video below:

The post Global Shield Online Conference: Measuring Unavoidable Risks in Technology [VIDEO] appeared first on Internet Safety Labs.

Tuesday, 04. March 2025

EdgeSecure

Learning Machines: Crafting a Future-Ready Cybersecurity Strategy for Higher Education

The post Learning Machines: Crafting a Future-Ready Cybersecurity Strategy for Higher Education appeared first on NJEdge Inc.

Digital ID for Canadians

Outlier Becomes First Canadian Firm Accredited as a DIACC Auditor 

Toronto, March 4, 2025 – DIACC proudly announces that Outlier Solutions Inc. (Outlier Compliance Group) has achieved accreditation as an official auditor. This milestone underscores…

Toronto, March 4, 2025 – DIACC proudly announces that Outlier Solutions Inc. (Outlier Compliance Group) has achieved accreditation as an official auditor. This milestone underscores Outlier’s commitment to enhancing the safety and consistency of Canada’s digital identity ecosystem.

Outlier underwent a thorough evaluation based on ISO 17020 requirements as part of a rigorous accreditation process. This demonstrated its qualifications to conduct comprehensive Pan-Canadian Trust Framework™ (PCTF) audits. This accreditation empowers Outlier to deliver independent and impartial assessments, which are crucial for maintaining the integrity and trustworthiness of digital identity solutions. 

“Independent, high-quality assessments are essential to building a secure and trustworthy digital identity ecosystem,” said Joni Brennan, DIACC President. “Outlier’s accreditation as a DIACC auditor marks an important step toward ensuring that digital identity solutions meet rigorous, transparent standards. Their expertise will help organizations demonstrate compliance while strengthening confidence in the digital economy.”

By providing reliable third-party assessments, Outlier promotes a culture of compliance across the digital space. This reflects its dedication to fostering transparency and creating a robust digital ecosystem. Outlier’s commitment to high-quality, impartial assessments aligns with DIACC’s mission to enhance confidence in digital services while tackling emerging risks associated with fraud and misinformation.

“Outlier Compliance Group is thrilled to become the first Canadian compliance consulting firm accredited as a DIACC auditor. This momentous step aligns with our dedication to providing top-tier compliance-related services and highlights our commitment to upholding the highest standards (as opposed to check-the-box compliance)”, said David Vijan, Co-Founder and CEO of Outlier. “We believe certifications such as the PCTF will shape the future of Canada’s digital identity ecosystem and foster greater trust, protection, innovation, and industry collaboration.” 

For more information about DIACC certification, which enhances trust and security in the digital landscape, please email us at voila@diacc.ca 

About Outlier Compliance Group

Founded in 2013, Outlier is a Canadian-born, boutique compliance consulting firm helping companies in both established and emerging sectors navigate increasingly complex Canadian regulatory requirements.

Outlier core staff members have over ten thousand hours of deep industry experience in heavily regulated industries working as in-house compliance practitioners before moving into the world of consulting.

Outlier believes that good compliance is good business.

About DIACC 

Founded in 2012, DIACC is a non-profit organization that unites public and private sector members to enhance participation in the global digital economy by leveraging digital trust services. By promoting vital design principles and PCTF adoption, DIACC champions privacy, security, and people-first design approaches. For more information, please visit https://diacc.ca


DIF Blog

Welcoming Creator Assertions Working Group to DIF

DIF is excited to welcome the Creator Assertions Working Group (CAWG) as a new working group of DIF. About CAWG CAWG builds upon the work of the Coalition for Content Provenance and Authenticity (C2PA) by defining additional assertions that allow content creators to express individual and organizational intent about their

DIF is excited to welcome the Creator Assertions Working Group (CAWG) as a new working group of DIF.

About CAWG

CAWG builds upon the work of the Coalition for Content Provenance and Authenticity (C2PA) by defining additional assertions that allow content creators to express individual and organizational intent about their content. CAWG defines an identity assertion that allows content creators to bind their online identity to the content that they produce.

A Collaboration with ToIP and CAWG

This new working group will be a joint collaboration between DIF and the Trust Over IP Foundation (ToIP), with the existing members of CAWG welcomed to participate. And of course, new members are welcome to join.

This group will be chaired by Eric Scouten, Identity Standards Architect at Adobe, with additional co-chairs to be named soon.

Get Involved

CAWG meetings will be held every other week starting Monday, 10 March at the following times:

Americas / European Time Zones: 8am PST / 1500 UTC
APAC Time Zones: 6pm PST / 0100 Tuesday UTC

To get involved, see https://cawg.io/

Monday, 03. March 2025

Hyperledger Foundation

Full Linea RPC APIs Support in Web3j

Web3j, an LF Decentralized Trust project, is a lightweight, highly modular Java library for integrating Ethereum-based applications. It allows developers to interact with Ethereum smart contracts, transactions, and accounts seamlessly. With growing adoption of Layer 2 solutions, Web3j has expanded its support for Layer 2 EVM-compatible chains like Linea, enabling developers to leverage

Web3j, an LF Decentralized Trust project, is a lightweight, highly modular Java library for integrating Ethereum-based applications. It allows developers to interact with Ethereum smart contracts, transactions, and accounts seamlessly. With growing adoption of Layer 2 solutions, Web3j has expanded its support for Layer 2 EVM-compatible chains like Linea, enabling developers to leverage cost-efficient and scalable blockchain interactions.


EdgeSecure

Learning Machines: Crafting a Future-Ready Cybersecurity Strategy for Higher Education

The post Learning Machines: Crafting a Future-Ready Cybersecurity Strategy for Higher Education appeared first on NJEdge Inc.

Webinar
February 27, 2025
10:00 AM ET

Discover how innovative machine learning techniques can transform your institution’s cybersecurity approach, empowering IT leaders to proactively safeguard critical data and assets in an ever-evolving digital landscape.

Complete the Form Below to Access Webinar Recording

The post Learning Machines: Crafting a Future-Ready Cybersecurity Strategy for Higher Education appeared first on NJEdge Inc.

Friday, 28. February 2025

Internet Safety Labs (Me2B)

Data Broker Presence in 2022 US K-12 Benchmark Apps

1. Overview In 2024, Internet Safety Labs (ISL) added 3rd parties observed in app network traffic to our app Safety Labels viewable in AppMicroscope.org. Recently, we reviewed the network traffic of the original 1541 apps, looking for data brokers and the results were clear: 16% of apps that were recommended or required in schools were sending […]
1. Overview

In 2024, Internet Safety Labs (ISL) added 3rd parties observed in app network traffic to our app Safety Labels viewable in AppMicroscope.org. Recently, we reviewed the network traffic of the original 1541 apps, looking for data brokers, and the results were clear: 16% of apps that were recommended or required in schools were sending student data to registered data brokers. Every state (and the District of Columbia) had at least three schools with apps communicating with data brokers. In total, 442 of the 663 studied schools (67%) had apps with data broker traffic.

Importantly, counting only “registered” data brokers excludes apps sending data to platforms whose data is destined for data brokers, as well as entities that should be registered as data brokers but aren’t.

This report details our findings and analysis on Edtech apps observed communicating with data brokers, as well as our recommendations for educators and app developers.  

1.1 The Inadequacy of “Data Broker” Legal Definitions 

This analysis counts only registered data brokers found in either the California Data Broker Registry or the Vermont Data Broker Registry.1 Readers should be aware that the legal definition of “data broker” in the US fails to properly account for and hold responsible the full data supply chain feeding data brokers. Specifically, it fails to include: 

First parties who sell personal information, such as mobile carriers, who were found as recently as last year to be selling some of the most sensitive personal information: location data.2
Entities who sell or share personal information in bulk for marketing and advertising purposes, including identity resolution platforms (IDRPs) and customer data platforms (CDPs), both designed to ingest and synthesize personal data from a multitude of services/platforms.
a. This also includes adtech entities such as Supply Side Platforms (SSPs), ad exchanges, Demand Side Platforms (DSPs), and Data Management Platforms (DMPs). These entities aggregate personal data shared via real-time bidding (RTB) messages (aka the “bidstream”). Note that these entities figured prominently in the recent Gravy Analytics analysis.3

Thus, we must assume that the volume of student data making its way into data brokers is substantially larger than this analysis conveys. 

1.2 Methodology 

In 2022, ISL conducted a privacy audit on recommended and required technologies for students in a representative sample of K-12 schools across the US. In total, ISL examined 1541 mobile apps, including analysis of the network traffic between the app, the first party, and all third-party servers.   

Next, ISL researchers determined the corporate owner of every subdomain that appeared in the network traffic collected for the apps.   

Finally, we determined if the corporate owner was a registered data broker by matching against companies in the California and Vermont data broker registries; note that this reflects data broker registries as of 2024.   
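To make the matching step concrete, here is a minimal sketch, assuming a hypothetical subdomain-to-owner mapping and registry list (illustrative placeholders, not ISL’s actual data or tooling), of how traffic observed from an app could be checked against data broker registries:

```python
# Minimal sketch of the registry-matching step described above.
# The owner map and registry entries below are hypothetical examples,
# not ISL's actual data or tooling.

# Corporate owner for each subdomain observed in an app's network traffic
subdomain_owner = {
    "ads.example-ssp.com": "ExampleSSP Inc.",
    "cdn.school-content.org": "School Content Org",
    "sync.example-idgraph.net": "ExampleIDGraph LLC",
}

# Companies appearing in the California or Vermont data broker registries
registered_data_brokers = {"ExampleSSP Inc.", "ExampleIDGraph LLC"}

def brokers_contacted(observed_subdomains):
    """Return the set of registered data brokers an app's traffic reached."""
    owners = {subdomain_owner.get(domain) for domain in observed_subdomains}
    return owners & registered_data_brokers

app_traffic = ["ads.example-ssp.com", "cdn.school-content.org"]
print(brokers_contacted(app_traffic))  # {'ExampleSSP Inc.'}
```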

2. Findings
2.1   Overall

243 apps (15.8% of tested apps) sent data to registered data brokers. These 243 apps communicated with a shocking 6.7 data brokers on average, meaning that when children use these apps, their information is typically sent to several data brokers.

The top app categories sending data to data brokers were: 

News apps (77% of audited news apps)
Reference apps (37%)
Sports apps (32%)
Community Engagement Platform (CEP) apps (26%)

News, reference and sports apps are not surprising; news apps are known to be rife with adtech and martech. See Figure 7 for all category counts. 

As noted in all three previously published findings reports, Community Engagement Platform apps were among the leakiest apps observed. The CEP developers with apps found to be communicating with data brokers are shown in Table 1.

Table 1: CEP app developers with apps communicating with data brokers

CEP App Developer | Total # of apps | # apps with data broker traffic | % with data brokers
Apptegy | 129 | 16 | 12%
Filament Essential Services | 5 | 1 | 20%
Finalsite | 122 | 42 | 34%
Focus School Software | 6 | 1 | 17%
From NowOn | 4 | 1 | 25%
Heather Hanks | 2 | 2 | 100%
Intrado Corporation | 42 | 8 | 19%
Mascot Media | 10 | 2 | 20%
SchooInfoApp | 14 | 2 | 14%
SchoolPointe | 8 | 2 | 25%
Straxis | 2 | 1 | 50%


Of the larger CEP developers (Apptegy, Finalsite, and Intrado), it’s clear that the apps’ configurability is influencing the presence of data brokers since not all of the apps were found to be communicating with data brokers. Finalsite, for example, provides an administrative dashboard that allows school districts to edit the URLs opened by the app. In 2021, ISL spoke with Blackboard (Anthology), then-owner of the Finalsite apps, and learned that the platform performed no checking on the domains entered by school administrators. We suggested that they add guardrails, checking for things like dangling or malicious domains, at a bare minimum. Finalsite acquired Anthology from Blackboard in September 2022.  

The Palm Beach County School District Android app (a CEP app) by Intrado included the most data brokers, at a whopping 31. (See also the Safety Label for the app here: https://appmicroscope.org/app/1579/) The app is no longer available on the Google Play store. 

Table 2 shows the five apps with the most data brokers from 2022 and from a recent retesting. Two of the apps have been removed from the store, but the other three are the same or worse with respect to the number of data brokers. 

Table 2: Top Five Apps – Most Data Brokers

App Name | Developer | # of Data Brokers (2022) | # of Data Brokers (2025)
Palm Beach County School District (Android) | Intrado Corp. | 31 | App removed from store
SBLive Sports (Android) | SB Live Sports | 27 | 28
AllSides – Balanced News (iOS) | AllSides | 27 | 28
Montgomery Public Schools (Android) | Finalsite | 27 | App removed from store
Westover Christian Academy (Android) | Apptegy | 25 | 34
2.2   EdTech 

As discussed in Findings Report 1, the majority of apps from the benchmark weren’t strictly edtech apps; the benchmark included a surprising number of non-edtech, general use apps. Isolating edtech categories, we find that only 6 apps (2.0%) of the strictly edtech apps had observed traffic to data brokers. While this is substantially better than the overall sample rate, for these kinds of services, there should be no data brokers receiving data from the apps.  

Table 3: Edtech apps with data broker traffic

Category | n | # apps with data broker traffic | %
Classroom Messaging Software | 30 | 1 | 3.3%
Digital Learning Platform | 27 | 0 | 0.0%
Safety Platform | 67 | 0 | 0.0%
School Management Software | 61 | 3 | 4.9%
Single Sign On | 5 | 0 | 0.0%
Student Information System | 47 | 1 | 2.1%
Study Tools | 28 | 0 | 0.0%
Virtual Classroom Software | 12 | 1 | 8.3%
Grand Total | 296 | 6 | 2.0%

 

The following are the EdTech apps communicating with data brokers:

Classroom Messaging Software apps:
FAMILIES | TalkingPoints (iOS)

School Management Software apps:
Choicelunch (Android)
Choicelunch (iOS)
WebMenus by ISITE Software (Android)

Student Information System apps:
k12 (Android)

Virtual Classroom Software apps:
ZOOM Cloud Meetings (Android)

2.3   Most Common Data Brokers

The three most frequently observed data brokers in the network traffic were PubMatic, LiveRamp, and Magnite (Table 4).  

Table 4: Top 25 Data Brokers Found in Network Traffic

Data Broker | # Apps in K12 Benchmark
PubMatic | 110
LiveRamp | 100
Magnite | 98
Lotame | 78
OpenX | 78
Freewheel | 76
Taboola | 72
Oracle | 71
Nielsen Marketing | 69
Tapad | 65
LiveIntent | 59
ID5 | 58
Neustar | 57
PulsePoint | 50
Outbrain | 45
StackAdapt | 45
Merkle Marketing | 42
Media.net | 41
Intent IQ | 38
33Across | 29
Wunderkind | 29
BounceX | 24
GumGum | 24
Zeta Global | 22
Bombora | 21
2.4   State-based Observations

Data brokers were found in apps in every state and the District of Columbia. That is, every state sample of 13 schools had at least one school with at least one app with data broker traffic. Figure 1 shows how many schools from the 2022 benchmark had apps that were sending traffic to data brokers. 13 schools were sampled in each state, so the heatmap reflects up to 100% (i.e. all 13 schools) having apps with traffic to data brokers. Texas, Wisconsin and Louisiana each had apps with data brokers in all 13 studied schools.

Figure 1: Number of schools in state sample with at least one app with data broker traffic (13 schools max per state)

Figure 2 shows the total number of apps with data broker traffic for each state sample of 13 schools. The states with the most apps with data broker traffic were Maryland, Kansas, and Minnesota. 

Figure 2: Total number of apps with data broker traffic

We hypothesize that the likelihood of apps with data broker traffic is mainly related to the sampled schools’ propensity to recommend a higher number of technologies to students. The correlation between the number of apps with data brokers and the total number of apps was moderately strong at .69. Figure 3, the heatmap showing the total number of apps recommended by the 13 sampled schools in each state, indeed shows similarities (Texas, Minnesota, Wisconsin and Maryland, in particular).
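For readers who want to run this kind of check themselves, the Pearson correlation between two per-state series can be computed in a few lines; this is a minimal sketch using placeholder values (not the report’s underlying per-state counts) and requires Python 3.10+ for statistics.correlation:

```python
# Minimal sketch of the correlation check described above: Pearson's r between
# per-state counts of apps with data broker traffic and total apps recommended.
# The values below are illustrative placeholders, not the report's data.
from statistics import correlation  # Python 3.10+

apps_with_brokers = [22, 5, 8, 21, 14]    # per-state counts (placeholder)
total_apps        = [60, 20, 30, 55, 40]  # per-state totals (placeholder)

r = correlation(apps_with_brokers, total_apps)
print(f"Pearson r = {r:.2f}")
```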

Figure 3: Total number of apps per state

We were interested to see if there was any obvious correlation between state privacy laws and the number of data brokers observed. Figure 4 shows states with student data privacy laws. Indeed, three of the nine states that don’t have student data laws, Minnesota, Wisconsin, and Maryland, each had high numbers of apps with data broker traffic, and all 13 schools in Wisconsin had apps with data broker traffic. While inconclusive with respect to causation, the correlation warrants future study. It’s also possible that the absence of a state student data privacy law encourages a higher number of technologies being recommended to students in schools.  

Figure 4: States with student data privacy laws
https://studentprivacycompass.org/state-laws/

There was no obvious correlation between states with children’s privacy laws and the number of apps with data broker traffic (Figure 5). 

Figure 5: States with children’s privacy laws
https://www.huschblackwell.com/2024-state-childrens-privacy-law-tracker

3. Recommendations

1) App developers: Apps or websites used by children should never send data to data brokers. They should also not send user data to customer data repositories.

a) Community Engagement Platform App Developers: ISL is calling on the community engagement app developers shown in Table 1 to immediately update all of their apps to remove all data brokers. We also call upon CEP app developers to install better guardrails in the administrative portal, minimally performing automated checking for dangling and malicious domains. Ideally, they should also disallow or flag any commercial sites with trackers (like MaxPreps), alerting school administrators of the risks such sites pose with respect to student data sharing. ISL recommends that schools not use CEP apps until they have demonstrated significant improvement in dangerous data sharing.

2) Schools, Educators, and Concerned Parents: While it’s 100% the responsibility of the app developer to ensure that their apps and websites are safe for children, schools may need to be the ones demanding removal of data brokers. To help you do this, we’ve updated our app safety labels to clearly identify the number of data brokers in the app (Figure 6). Educators, school IT personnel and concerned parents can look up the app safety label for any app that they’re recommending to students. If it includes data brokers, stay away from the app. Can’t find your app? Contact us and we’ll be happy to audit a new app or re-audit an existing app.

Figure 6: Updated app safety label https://appmicroscope.org/app/1579/

Figure 7: Apps with data broker traffic by app category

 

Table 3: Apps with Data Broker Traffic by State

State | # of schools with at least one app with data broker traffic (13 sampled per state) | % of schools using at least one app with data broker traffic | Total # apps with data broker traffic | Total # unique apps | State privacy law flags (children’s / student data: Y, P, or blank)
Alabama | 10 | 77% | 22 | 15 |
Alaska | 3 | 23% | 5 | 5 |
Arizona | 6 | 46% | 8 | 7 | Y
Arkansas | 11 | 85% | 21 | 13 | Y
California | 9 | 69% | 14 | 12 | Y Y
Colorado | 5 | 38% | 7 | 5 | Y Y
Connecticut | 12 | 92% | 30 | 17 | Y Y
Delaware | 11 | 85% | 28 | 13 | Y
Washington, D.C. | 9 | 69% | 18 | 6 | Y
Florida | 9 | 69% | 36 | 23 | Y Y
Georgia | 12 | 92% | 30 | 16 | Y
Hawaii | 4 | 31% | 6 | 5 | Y
Idaho | 7 | 54% | 10 | 6 | Y
Illinois | 9 | 69% | 24 | 14 |
Indiana | 9 | 69% | 25 | 13 | P Y
Iowa | 5 | 38% | 18 | 18 | Y
Kansas | 12 | 92% | 39 | 24 | Y
Kentucky | 8 | 62% | 21 | 16 | Y
Louisiana | 13 | 100% | 27 | 9 | Y
Maine | 11 | 85% | 26 | 14 | Y
Maryland | 11 | 85% | 47 | 28 | Y
Massachusetts | 11 | 85% | 27 | 10 | Y
Michigan | 9 | 69% | 21 | 11 | Y
Minnesota | 9 | 69% | 39 | 23 |
Mississippi | 10 | 77% | 24 | 17 | Y
Missouri | 8 | 62% | 34 | 23 | Y
Montana | 10 | 77% | 17 | 12 | Y
Nebraska | 9 | 69% | 20 | 17 | Y
Nevada | 4 | 31% | 7 | 6 | Y
New Hampshire | 11 | 85% | 18 | 5 | Y
New Jersey | 9 | 69% | 17 | 11 |
New Mexico | 3 | 23% | 3 | 3 | Y
New York | 6 | 46% | 12 | 6 | Y Y
North Carolina | 7 | 54% | 9 | 4 | Y
North Dakota | 8 | 62% | 24 | 19 |
Ohio | 7 | 54% | 14 | 10 | Y
Oklahoma | 10 | 77% | 26 | 12 | Y
Oregon | 5 | 38% | 6 | 3 |
Pennsylvania | 8 | 62% | 11 | 8 | Y
Rhode Island | 12 | 92% | 29 | 10 | Y
South Carolina | 10 | 77% | 15 | 4 | Y
South Dakota | 8 | 62% | 23 | 17 | Y
Tennessee | 11 | 85% | 28 | 16 | P Y
Texas | 13 | 100% | 34 | 15 | Y
Utah | 7 | 54% | 9 | 5 | Y Y
Vermont | 11 | 85% | 15 | 6 | Y
Virginia | 8 | 62% | 18 | 13 | Y Y
Washington | 3 | 23% | 4 | 4 | Y
West Virginia | 9 | 69% | 18 | 14 | Y
Wisconsin | 13 | 100% | 36 | 15 |
Wyoming | 7 | 54% | 9 | 5 | Y

 

Footnotes:
1. ISL is updating the database with both Texas and Oregon data broker registries.
2. https://www.fcc.gov/document/fcc-fines-largest-wireless-carriers-sharing-location-data
3. https://www.wired.com/story/gravy-location-data-app-leak-rtb/

The post Data Broker Presence in 2022 US K-12 Benchmark Apps appeared first on Internet Safety Labs.


We Are Open co-op

Exploring the Dimensions of AI Literacies with Dr Angela Gunder

Join us for two upcoming events as we help map the contours of the landscape Image created by Laura Hilliger based on the work of Angela Gunder, remixed from work by Doug Belshaw 🤘 What is “digital literacy”? That was the topic of my doctoral thesis, which argued for a plurality of literacies, contextually applied from eight “essential elements.” When submitting my thesis, I insisted on using
Join us for two upcoming events as we help map the contours of the landscape

Image created by Laura Hilliger based on the work of Angela Gunder, remixed from work by Doug Belshaw 🤘

What is “digital literacy”? That was the topic of my doctoral thesis, which argued for a plurality of literacies, contextually applied from eight “essential elements.” When submitting my thesis, I insisted on using the least restrictive license possible (CC0), as I wanted to encourage one of its central tenets: remix.

That’s why I’m delighted WAO is embarking on a collaboration with Dr Angela Gunder of Opened Culture and her work on the Dimensions of AI Literacies, a thoughtful remix and extension of my own:

“Rather than viewing literacy as a singular skill, the Dimensions of AI Literacies embrace a pluralistic perspective, recognizing that individuals can develop and apply a range of interconnected competencies. AI literacies encompass the skills needed to comprehend, utilize, and critically evaluate AI within complex environments. As educators, learners, and leaders engage with AI, they must navigate both the technical aspects of AI tools and the ethical, cultural, and social implications of their use. AI literacies enable individuals to make informed decisions, adapt AI tools to different contexts, and ensure that applications of AI in academic settings are ethical and inclusive.” (Opened Culture)

We have two upcoming (and free!) online events. The first is part of Open Education Week and the second forms part of (US) National AI Literacy Day. We warmly invite you to join us for either or both of these one-hour sessions, where we will be joined by special guests as we map out the contours of the AI Literacy landscape.

We’re delighted to be working with our friends at Reclaim TV for these events. Reclaim Hosting has long been a friend to Open Education, and we’re delighted to be using open technologies to stream out our sessions!

AI Literacies for Open Educators: An Exploration

Date: Friday 7th March 2025
Time: 15:00 UTC (what time is that for me?)
Registration link: https://lu.ma/mak647t1

​Artificial intelligence is rapidly transforming education, bringing both opportunities and challenges for open educators. This interactive workshop, hosted by Dr Doug Belshaw and Dr Angela Gunder, explores a plurality of “AI Literacies” for understanding and applying AI within open education.

​Drawing from the recently-published paper AI Literacies and the Advancement of Opened Culture: Global Perspectives and Practices, the session will provide a structured introduction to AI literacies. The hosts will then guide participants through a discussion around the ethical, technical, and pedagogical implications for open educators using generative AI.

We’re delighted that Dr Genevieve Smith-Nunes will be joining us, along with other guests to be announced shortly.

​Participants will be encouraged to add any relevant openly-licensed resources to an open access repository.

Key takeaways for participants:

A practical understanding of AI literacies as they apply to open education.
An approach to help co-create definitions and frameworks relating to AI literacies in specific contexts.
Access to an open repository of AI and open education resources.

Dimensions of AI Literacies: An Introduction

Date: Friday 28th March 2025
Time: 15:00 UTC (what time is that for me?)
Registration link: https://lu.ma/lm657f0p

Artificial intelligence is rapidly transforming every facet of education for learners, educators, and leaders alike. In this time of great change, how can we equip our learners (and ourselves) with the skills necessary to navigate and influence the evolving landscape of artificial intelligence?

​Join Dr. Angela Gunder (Opened Culture) and Dr. Doug Belshaw (We Are Open) in this free, interactive workshop as they unveil the multifaceted and transformative world of “AI Literacies.”

​Recognizing that literacy is not merely an on/off switch of literacy vs. illiteracy, but a spectrum of interconnected and evolving skills, this online session will introduce educators of all levels and contexts to eight different dimensions of competencies, skills, and mindsets that comprise AI literacies.

​Pulling from findings from a commissioned research study by UNESCO IITE and Shanghai Open University, the virtual workshop will provide a foundational understanding of AI literacies, emphasizing their critical role in education today. Dr. Gunder and Dr. Belshaw will explore the critical, creative, and communicative dimensions of using generative AI in classrooms, guiding participants through the implications and opportunities it presents.

​This workshop is designed to engage educators at all levels, particularly those new to the concept of AI literacies, in developing strategies to help their students succeed in an AI-enabled world.

Key takeaways for participants:

Unpacking AI literacies as a dynamic set of skills vital for both educators and students.
Developing strategies to integrate and teach AI literacies across a wide variety of educational settings.
Gaining access to a new open course in D2L Brightspace highlighting the application of AI literacies.
Participating in an open community sharing and exploring how AI literacies manifest in educational practices worldwide.

​Embark on this journey to discover how AI can be harnessed to foster a future-ready generation.

Are you looking for some help with your own AI Literacy initiatives? Please check out AILiteracy.fyi and get in touch for more bespoke help in this area!

Exploring the Dimensions of AI Literacies with Dr Angela Gunder was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 27. February 2025

OpenID

OPIN joins OIDF Board via Peers Consulting + Technology

The OpenID Foundation is delighted to announce that Open Insurance Brazil (OPIN) will be joining the OpenID Foundation Board of Directors as a Sustaining Member, via their strategic partnership with Peers Consulting + Technology. Francisco Leme, Chief Technology and Operations Officer for OPIN and representative for Peers Consulting, will represent them on the Board.   Open […]

The OpenID Foundation is delighted to announce that Open Insurance Brazil (OPIN) will be joining the OpenID Foundation Board of Directors as a Sustaining Member, via their strategic partnership with Peers Consulting + Technology. Francisco Leme, Chief Technology and Operations Officer for OPIN and representative for Peers Consulting, will represent them on the Board.  

Open Insurance Brazil (OPIN) is a major initiative in Brazil, which came into being in 2021 to ensure the standardization of insurance data and service sharing. The OPIN project is managed by Peers Consulting in partnership with ecosystem participants, in accordance with the regulations established by the Superintendence of Private Insurance (SUSEP).

OPIN had already been working closely with the OpenID Foundation as a valued ecosystem partner, since it selected and mandated the OpenID Foundation’s FAPI standard and self-certification capabilities into its ecosystem. The decision to join the Board is a well received evolution of this strategic relationship with mutual benefits. It raises the profile of the insurance sector, Latin American requirements, and ecosystem experience to the OpenID Foundation leadership. 

OPIN follows in the footsteps of its domestic peer, Open Finance Brazil (represented on the OpenID Foundation’s Board by Chicago Advisory Partners), a Latin American trailblazer in adopting and mandating the OpenID Foundation’s FAPI standard and FAPI certification for the Brazilian financial services vertical. 

Commenting on the appointment, OpenID Foundation’s Executive Director Gail Hodges said: “OPIN and Peers Consulting have already achieved a great result as the first open insurance ecosystem in the world to implement FAPI, and the OpenID Foundation has been proud to support them on their journey.

“We have no doubt that OPIN, Peers and Francisco himself will have a positive impact on our work to scale the adoption of open standards within Brazil, Latin America, and beyond.

“Their decision to join the OpenID Foundation is very timely as more countries increase their focus on open data requirements. According to the Cambridge Centre for Alternative Finance, there are now 95 jurisdictions with open data legislation, regulation or guidance that is either in development, moving through the planning process, or already passed. We welcome the opportunity to work with them all.”

Francisco Leme added: “I’m very thrilled to be part of the Board of Directors of the OpenID Foundation, representing Open Insurance Brazil. We are now even more together, combining our expertise and knowledge for the benefit of all insurance companies.”

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.   

About Peers Consulting + Technology

Peers Consulting & Technology is the fastest-growing business and technology consulting firm in Latin America. Its work is focused on business and digital journeys, offering customized approaches that range from strategic analysis to implementation.

Certified by institutions such as FIA, ISG, Financial Times, Glassdoor, and Great Place to Work, it boasts a highly qualified multidisciplinary team. With more than 300 professionals, the company works closely and inquisitively to tackle the challenges faced by large national and international companies, as well as leading organizations in the nonprofit sector.

A partner in building the future of its clients, Peers Consulting has expertise in a wide range of areas and industries, including banking and finance, supply chain, digital, organizational strategy, M&A, healthcare, insurance, education, ESG, and more. The firm collaborates with industry-leading companies such as C&A, Alpargatas, Grupo Boticário, Porto Seguro, and companies recently acquired by the largest private equity funds operating in the country.

The post OPIN joins OIDF Board via Peers Consulting + Technology first appeared on OpenID Foundation.


Velocity Network

Jim Owens Re-Elected as Chairman of the Board

We're delighted that National Student Clearinghouse's Chris Goodson has been voted onto the Velocity Network Foundation Board of Directors. The post Jim Owens Re-Elected as Chairman of the Board appeared first on Velocity.

Wednesday, 26. February 2025

Digital Identity NZ

More momentum, greater choice – Digital Trust is getting real

2025 is well underway with a healthy mix of activity, excitement and opportunity, Digital Trust included. The post More momentum, greater choice – Digital Trust is getting real appeared first on Digital Identity New Zealand.

Kia ora,

2025 is well underway with a healthy mix of activity, excitement and opportunity, Digital Trust included.

We are seeing a major shift in this space as acceptance networks drive up digital identity adoption at the expense of ‘great in theory, hard to implement’ ideas like SSI (Self-Sovereign Identity), or the (hotly debated) need for digital identity itself.

Member News

The final list of participants in Australia’s Age Assurance trial was recently announced, and it’s great to see Digital Identity NZ members General Identity Protocol, GBG and MyMahi amongst them!

It’s terrific to welcome CentrifAI Ltd and welcome back Westpac, making it a ‘big four’ clean sweep, plus the Co-operative Bank (the only NZ owned bank member). See all member organisations here.

DINZ has one Corporate seat and one further seat available on the Executive Council to help with our diversity, equity and inclusion policy. If your organisation is not a member, join and lead.

Free access to InformDI’s 7 Chapters of DISTF, sponsored by NEC and DINZ, ends on 31 March. Register now and join the 100+ that have already taken the course. Want to sponsor? Get in touch

What’s Happening Around the Globe  

Australia: Banks are driving easy digital identity adoption across the ditch.
UK: Government has announced plans for its own digital wallet to mainly hold mDLs (Mobile Driver’s License) and similar government held official data, according to industry comment.
NZ: Global media pick up NZ news too (hat tip to Biometric Update and Inidsol UK).


DINZ Updates

DINZ joined the Minister, FinTechNZ, agencies, and industry participants at the fourth FinTech Innovation Roundtable as we seek to remove obstacles to competition and increase service choice for New Zealanders. While Chatham House Rules apply to proceedings, DINZ has published the paper distributed before and tabled at the meeting addressing two digital identity related challenges.
Protecting people from online harm is at the heart of everything we do. So, if you attended Andrew Hughes’ DINZ webinar on Deepfakes and ID Verification last month, you’ll be interested in this post, as well as mDL (Mobile Driver’s License) with DINZ members Visa and MATTR highlighted here.
If you missed DINZ’s webinar with Payments NZ on its strategic paper currently out for consultation, ‘Payments for the next generation’, you can watch the recording here.


A Personal Note

As members already know (and others will learn from social media), after three and a half years as DINZ’s longest serving Executive Director, I’m stepping down in the next couple of months to move towards my original intention when returning from Europe – more downtime alongside some digital identity advisory work that uses my 20+ years of knowledge and experience in this space. As my time as Executive Director comes to an end, the DINZ Executive Council welcomes your suggestions on the following; please get in touch:

1. What Digital Trust related project ideas do you have in mind to lead this year, so they can be considered in DINZ’s plans? Sponsor that work in DINZ, or sponsor projects remaining from 2024.

2. Are you (or do you know of anyone who would be) interested in the role of Executive Director of Digital Identity NZ? Get in touch


Upcoming Events

4 March, 10am: DINZ Virtual Coffee Chat. Bring your digital identity ideation along to discuss.
4 March, 12pm: Worldline, hosted by DINZ, will reveal how it is revolutionising New Zealand’s payment landscape by integrating digital identity credentials into daily transactions. Register here.
25 March, 12pm: Save the Date Now! ‘Meet the DISTF evaluators’ (those that are DINZ members at least). Considering DISTF accreditation? This is unmissable. Registration is coming soon.
8 April, Identity Management Day 2025: Up for an early start? Join IDSA 12pm – 9am NZ.

DIA Training Schedule Confirmed

Identification management plays a core role in our work, and members should have a foundational level of understanding about what it is and how it impacts your customers. Good identification management helps reduce and/or prevent fraud, loss of privacy and identity theft, by applying good practices and processes. 

Topics covered in DIA Training Courses:

Identification Essentials (G1)
Entity Information – Names and other Information (G2)
Introduction to Identification Standards (G3)
Biometrics 101 (G4)

FREE online learning at your own pace. Learn more. 

For any enquiries relating to Identification Management training email identity@dia.govt.nz

Ngā mihi,
Colin Wallis
Executive Director, Digital Identity NZ

Member news, global trends, upcoming events and more.
Read full news here: More momentum, greater choice – Digital Trust is getting real


The post More momentum, greater choice – Digital Trust is getting real appeared first on Digital Identity New Zealand.


FIDO Alliance

The State of Passkey Deployment in the Enterprise: A Snapshot of Passkey Deployments for Employee Sign-ins in the U.S. and UK

A Snapshot of Passkey Deployments for Employee Sign-ins in the U.S. and UK Key findings Registrants can watch the webinar on demand.

A Snapshot of Passkey Deployments for Employee Sign-ins in the U.S. and UK

Key findings

Enterprises understand the value of passkeys and the majority are rolling out passkeys for workforce sign-ins: the majority have either deployed or are in the midst of deploying passkeys, with goals tied to improved user experience, enhanced security, and standards/regulatory compliance. Those that are deploying are rolling out a mix of device-bound and synced passkeys.
Enterprises are prioritizing passkey rollouts to users with access to sensitive data and applications, and are leveraging communication, training and documentation to increase adoption.
Enterprises are reporting significant security and business benefits after rolling out passkeys: they report positive impacts on user experience, security, cost reduction, productivity and digital transformation goals, and are seeing declines in usage of legacy authentication methods. Interestingly, these benefits directly correlate with what businesses that aren’t yet using passkeys dislike most about their current authentication methods: that they can be compromised, are costly, and are difficult to use.
Organizations that do not have active passkey projects cite complexity, costs and overall lack of clarity about implementation as reasons, signaling a need for increased education to enterprises on rollout strategies to reduce concerns.

Read the Full Report
Read the Press Release

Registrants can watch the webinar on demand.

Watch On Demand

New FIDO Alliance Research Shows 87% of U.S. and UK Workforces are Deploying Passkeys for Employee Sign-ins

Respondents report positive impacts on user experience, security, productivity, and cost reduction from deploying a mix of device-bound and synced passkeys February 26, 2025 — The FIDO Alliance along with […]

Respondents report positive impacts on user experience, security, productivity, and cost reduction from deploying a mix of device-bound and synced passkeys

February 26, 2025 — The FIDO Alliance along with underwriters Axiad, HID, and Thales today released its State of Passkey Deployment in the Enterprise report, finding that 87% of surveyed companies have rolled out, or are in the midst of rolling out, passkeys with goals tied to improved user experience, enhanced security, and compliance. 

The report is the result of an independent survey commissioned in September 2024 by the FIDO Alliance Enterprise Deployment Working Group, with underwriting support from Axiad, HID, and Thales, to understand the state of passkey deployments in the U.S. and UK; the methods used to deploy passkeys and enroll employees; and the perceived barriers to deployment. Read the report at https://fidoalliance.org/research-state-of-passkey-deployment-in-the-enterprise-a-snapshot-of-deployments-employee-sign-ins-us-uk.

The survey revealed four key findings:

1. Enterprises understand the value of passkeys for workforce sign-ins. A majority of decision makers (87%) report deploying passkeys at their companies. Of these, 47% report rolling out a mix of device-bound passkeys (on physical security keys and/or cards) and synced passkeys (synced securely across the user’s devices).
2. Organizations are prioritizing passkey rollouts to users with access to sensitive data and applications, including the three most commonly cited priority groups: those requiring access to IP (39%), users with admin accounts (39%) and users at the executive level (34%). Within these deployments, organizations are leveraging communication, training, and documentation to increase adoption.
3. Passkey deployments are linked to significant security and business benefits. Respondents report moderate to strong positive impacts on user experience (82%), security (90%), help-center call reduction (77%), productivity (73%), and digital transformation goals (83%).
4. Groups that do not have active passkey projects cite complexity (43%), costs (33%), and lack of clarity (29%) about implementation as reasons. This signals a need for increased education for enterprises on rollout strategies to reduce concerns, as there is a correlation between these perceived challenges and the proven benefits of passkeys.

“This study is equally encouraging and illuminating as it points to strong willingness and commitment to deploy passkeys to employees – and also is informative in helping FIDO shape resources that we can deliver to help enterprises around the world more quickly and effectively implement their FIDO authentication strategies,” said Andrew Shikiar, CEO and executive director of the FIDO Alliance. “Passkeys can stop AI-generated social engineering attacks in their tracks while also increasing employee productivity and reducing costs associated with help desk support and security breaches. FIDO Alliance is committed to helping more companies around the world realize these benefits by providing actionable passkey implementation guidance and best practices, which this data will help define.”

New phishing and fraud attempts are being used every day, driven in particular by widespread generative AI use. As reflected in the report, enterprise leaders are becoming aware of the limitations of compromisable passwords, and seeing the value of deploying the most secure and user-friendly authentication methods possible. These insights will be leveraged to further remove the perceived and/or real barriers around passkey adoption so more enterprises can experience their benefits on a global scale. 

Learn More During FIDO’s March 6 Webcast 

The FIDO Alliance will host a webcast on March 6, 2025 at 8am PST to provide further insights into the report methodology, the findings and next steps. The webcast will feature Michael Thelander, senior director of product marketing at Axiad; Katie Björk, director of communications and solution marketing at HID; and Sarah Lefavrais, Authentication devices product marketing director at Thales, along with Megan Shamas, chief marketing officer of the FIDO Alliance. Register here.

Michael Thelander, Axiad’s director of product marketing, thinks the survey results will deliver not just interesting data but also a path for FIDO2 to become a first-class citizen alongside other forms of PKI-based authentication in the enterprise. “Passkey technology has not only matured, but this survey reveals how identity practitioners and strategists are beginning to integrate passkeys with their other workforce authentication methods, across different platforms and device types, to deliver what identity architects and users both want: strong authentication that doesn’t place a ‘friction tax’ on the last step of accessing systems and networks.” 

“HID, in collaboration with fellow FIDO Alliance members, launched this survey to gain insights into the priorities of enterprise and security leaders that drive successful passkey implementation. We also aimed to identify the challenges other organizations encounter when integrating FIDO technology into their authentication strategies. HID’s overarching goal is to empower organizations to meet their business objectives by eliminating one of their most significant obstacles: user experience and security challenges linked to passwords,” says Katie Björk, Director of Communications and Solution Marketing.

“Thales is excited to collaborate with the FIDO Alliance for this research, which underscores the growing adoption of passkeys for employee sign-ins,” said Haider Iqbal, Director Product Marketing IAM at Thales. “We’re seeing similar interest from our customers, who recognize the benefits of FIDO authentication for both security and productivity. Thales is committed to enabling organizations to migrate their workforce and customers to passkeys, helping them stay ahead of the curve with secure, seamless and frictionless digital journeys for all users.”

Survey Methodology:

The survey was conducted among 400 decision makers who are, or would be, involved in passkey deployment in companies with 500+ employees across the UK and the US. The interviews were conducted online by Sapio Research in September 2024 using an email invitation and an online survey. At an overall level, results are accurate to ±4.9% at 95% confidence limits, assuming a result of 50%. The survey was produced by the FIDO Alliance Enterprise Deployment Working Group, with underwriting support from Axiad, HID, and Thales.

About the FIDO Alliance

The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies, and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services.

About Axiad

Axiad is an identity security company whose products make authentication and identity risk management simple, effective and real. Our credential management systems make MFA defensible, manageable and usable. Our cutting-edge risk solutions help customers identify and quantify risk and fortify their systems against a barrage of new attacks. Learn more at www.axiad.com.

About HID

HID powers the trusted identities of the world’s people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people convenient access to physical and digital places and connect things that can be identified, verified and tracked digitally. Millions of people around the world use HID’s products and services to navigate their everyday lives, and billions of things are connected through HID’s technology. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID has over 4,500 employees worldwide and operates international offices that support more than 100 countries. HID is an ASSA ABLOY Group brand. For more information, visit www.hidglobal.com.


About Thales Cybersecurity Products

In today’s digital landscape, organizations rely on Thales to protect what matters most – applications, data, identities, and software. Trusted globally, Thales safeguards organizations against cyber threats and secures sensitive information and all paths to it — in the cloud, data centers, and across networks. Thales offers platforms that reduce the risks and complexities of protecting applications, data, identities and software, all aimed at empowering organizations to operate securely in the digital landscape. By leveraging Thales’s solutions, businesses can transition to the cloud with confidence, meet compliance requirements, optimize software usage, and deliver exceptional digital experiences to their users worldwide.

More on Thales Cybersecurity Products: https://cpl.thalesgroup.com/

More on Thales Group: www.thalesgroup.com

Contact
press@fidoalliance.org 


MyData

EUDI wallets: challenges and opportunities in public sector

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. Wallets offer opportunities for […]

Tuesday, 25. February 2025

OpenID

Notice of a Security Vulnerability

The OpenID Foundation is committed to maintaining the highest security standards in identity protocols and takes security research seriously. As our specifications move towards final, we engage security researchers to conduct a rigorous security analysis and identify any vulnerabilities in the specifications. During a formal analysis of OpenID Federation, a security vulnerability was discovered re

The OpenID Foundation is committed to maintaining the highest security standards in identity protocols and takes security research seriously. As our specifications move towards final, we engage security researchers to conduct a rigorous security analysis and identify any vulnerabilities in the specifications. During a formal analysis of OpenID Federation, a security vulnerability was discovered relating to ambiguities in the audience values of JWTs sent to authorization servers. This vulnerability also impacts other OpenID specifications and OAuth specifications. Corrective actions have already been taken and incorporated into OpenID Foundation specifications and certification tests to address the potential issue. Corrective actions are under way for the affected OAuth specifications as well.  In parallel, we have been working closely with relevant stakeholders to ensure robust mitigation strategies are in place across the implementer and standards communities.

At this time, we are not aware of any known compromises that occurred resulting from this potential attack vector. Some ecosystems that were previously vulnerable have updated their deployments to address the vulnerability. Our focus is on ensuring that all implementers are well-equipped with the guidance needed to secure their deployments effectively.

Our sincere thanks to the University of Stuttgart security researchers Dr. Ralf Küsters, Tim Würtele, and Pedram Hosseyni for their due diligence that led to the identification of this security vulnerability. This discovery is an example of the value of security analysis, partnerships, and community collaboration.

Further details on this security vulnerability can be found here: https://openid.net/wp-content/uploads/2025/01/OIDF-Responsible-Disclosure-Notice-on-Security-Vulnerability-for-private_key_jwt.pdf. Questions relating to your own implementation can be directed to certification@oidf.org.

The vulnerability has been assigned CVE numbers:

CVE-2025-27370 for OpenID Foundation private_key_jwt as defined in OpenID Connect
CVE-2025-27371 for IETF OAuth2 JWT client authentication assertions as defined in RFC 7521/7523

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

 

The post Notice of a Security Vulnerability first appeared on OpenID Foundation.


OIDF Re-Elected to OpenWallet Foundation’s Board

The OpenID Foundation is delighted to announce its re-election to the OpenWallet Foundation’s (OWF) Board of Directors as an Observer. This position allows the OpenID Foundation to continue representing the OWF’s associate sponsors, which include a diverse array of non-profit, academic, and government entities. For the year ahead, the OpenID Foundation is especially focused on […] The post OIDF

The OpenID Foundation is delighted to announce its re-election to the OpenWallet Foundation’s (OWF) Board of Directors as an Observer. This position allows the OpenID Foundation to continue representing the OWF’s associate sponsors, which include a diverse array of non-profit, academic, and government entities.

For the year ahead, the OpenID Foundation is especially focused on fostering proactive communication amongst the associate members, and ensuring the technical developments of the OWF deliver on the OpenID Foundation’s mission and roadmap for 2025. 

Joseph Heenan, Standards Special and Certification Director at the OpenID Foundation and CTO at Authlete, will once again serve as the associate members’ representative on the Board, with Gail Hodges, Executive Director at the OpenID Foundation, as the alternate representative. Their leadership will help ensure that the OpenID Foundation and other associate members remain actively engaged in key discussions and decisions that shape the future of digital identity.

A commitment to open, secure, and interoperable digital wallets

The OWF is a collaborative, open initiative under the umbrella of the Linux Foundation, headquartered in Brussels, Belgium. It provides a secure environment where developers work together on standards-based open-source components. These components empower issuers, wallet providers, and relying parties to build implementations that prioritize user choice, security, and privacy.

A continued role in shaping the digital identity landscape

This marks the second time the OpenID Foundation has been elected to this role. Since the OWF’s inception in 2023, the OpenID Foundation has been an active contributor, and its continued presence on the Board underscores the value of its expertise. The one-year term allows the OpenID Foundation to participate in all Board meetings, share insights, provide feedback, and influence critical developments in this space.

A key takeaway from this engagement with the OWF is the growing adoption of OpenID Foundation’s standards across OWF projects. The OpenID Foundation is particularly pleased that many OWF initiatives are integrating these standards and participating in the testing of the OpenID Foundation’s conformance tools for the OpenID for Verifiable Credentials family of specifications. This adoption underscores the importance of secure, standardized approaches in digital identity and wallet implementations. 

Looking ahead

The OpenID Foundation remains committed to fostering innovation and security in the digital identity landscape. By sharing its extensive experience with OWF and other ecosystems, the Foundation continues to play a pivotal role in shaping the future of identity standards.

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.  

The post OIDF Re-Elected to OpenWallet Foundation’s Board first appeared on OpenID Foundation.


Hyperledger Foundation

Celebrating Black History Month: Advancing FinTech and Blockchain Innovation at HBCUs

As we celebrate Black History Month, we reflect on the remarkable strides that Historically Black Colleges and Universities (HBCUs) have made in financial technology and blockchain innovation. At the forefront of this progress is the National FinTech Center at Morgan State University, dedicated to equipping HBCU students and faculty with the knowledge, tools, and opportunities to lead in the rapidly

As we celebrate Black History Month, we reflect on the remarkable strides that Historically Black Colleges and Universities (HBCUs) have made in financial technology and blockchain innovation. At the forefront of this progress is the National FinTech Center at Morgan State University, dedicated to equipping HBCU students and faculty with the knowledge, tools, and opportunities to lead in the rapidly evolving world of FinTech and blockchain.


DIF Blog

3 reasons why you should join the DIDComm User Group Meeting

Guest Blog By Colton Wolkins The DIDComm User Group Meeting is open to anyone interested in using and implementing DIDComm. This is a great place for developers to learn, ask questions, and share experiences! DIDComm (Decentralized Identity Communication) is rapidly becoming a fundamental protocol for secure, private, and interoperable messaging

Guest Blog By Colton Wolkins

The DIDComm User Group Meeting is open to anyone interested in using and implementing DIDComm. This is a great place for developers to learn, ask questions, and share experiences!

DIDComm (Decentralized Identity Communication) is rapidly becoming a fundamental protocol for secure, private, and interoperable messaging in decentralized identity systems. As adoption grows, developers and organizations worldwide are implementing DIDComm to enable seamless, trustable communication between digital identities. But navigating the technical landscape can be challenging—this is where the DIDComm User Group comes in.

Unlike the DIDComm Working Group, which is composed of DIF members actively contributing to the DIDComm Specification, the DIDComm User Group is an open forum for anyone interested in implementing and using the DIDComm protocol to connect with others, share experiences, and get answers to their questions. 

1. See Demonstrations and Presentations from Other DIDComm Implementers

Learn from real-world implementations! The User Group meetings often feature demonstrations and presentations from developers who are actively working with DIDComm. These sessions provide insights into how others are tackling challenges, designing solutions, and deploying DIDComm in various environments.

Recently, a demonstration involving a Raspberry Pi showcased how DIDComm can be used to send messages back and forth with another device privately, securely, and even when both devices are on different networks. Following the demonstration, a short Q&A session was held, leading into a discussion about how DIDComm can mitigate many of the issues that the industry has with IoT devices.
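
For readers new to the protocol, messages exchanged in a setup like this follow the DIDComm v2 plaintext format before they are packed for transport. The sketch below shows that rough shape; the DIDs, message type, and body content are illustrative placeholders rather than values from the demo.

```typescript
// Minimal DIDComm v2 plaintext message, as it might look before packing.
// DIDs, timestamps, and body content are illustrative placeholders only.
interface DIDCommPlaintextMessage {
  id: string;             // unique message id
  type: string;           // protocol message type URI
  from?: string;          // sender DID
  to?: string[];          // recipient DIDs
  created_time?: number;  // Unix timestamp (seconds)
  body: Record<string, unknown>;
}

const reading: DIDCommPlaintextMessage = {
  id: "8f7c2b1a-0d4e-4b6f-9a3c-5e1d2f3a4b5c",
  type: "https://didcomm.org/basicmessage/2.0/message",
  from: "did:peer:2.Ez...sensor",      // hypothetical sender DID
  to: ["did:peer:2.Ez...controller"],  // hypothetical recipient DID
  created_time: 1740000000,
  body: { content: "Front door locked at 22:15" },
};

// A DIDComm library would then pack this plaintext into a signed and/or
// encrypted envelope before sending it over any transport (HTTP, Bluetooth, etc.).
```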

2. Receive Help with Your DIDComm Implementation

Are you facing technical issues while integrating DIDComm? Do you have questions about the DIDComm specification? The User Group meetings offer a collaborative space where developers can ask questions and receive support from peers and experts. Whether it’s debugging a problem, discussing best practices, or understanding implementation nuances, the User Group is an excellent place to get practical guidance from those who have been there before.

Just this last week, the Credo project reached out to the User Group due to a UI/UX concern that they had. After some discussion, the group came up with a better solution than the one proposed, and is coordinating follow-up.

3. Learn About DIDComm Adoption Across Industries and Regions

DIDComm is being adopted across industries, from finance and healthcare to travel and enterprise security. By attending these meetings, you gain insight into how organizations worldwide are leveraging DIDComm to enhance security and privacy in digital communications. Understanding the breadth of adoption can help you identify potential partnerships, new use cases, and emerging trends that may influence your own projects.

Read more about DIDComm and deployment examples from around the world.

Take Action: Join the Next User Group Meeting

Join us and be part of the conversation shaping the future of decentralized identity communication! To accommodate a global audience, there are two meeting times—one convenient for North American participants and another scheduled for APAC-friendly time zones. 

DIF DIDComm User Group meeting (US Time Zones)
DIF DIDComm User Group meeting (APAC/EU Time Zones)

Interested in shaping the DIDComm standard? Contact DIF about becoming a member and contributing to the DIDComm Working Group.

To stay updated on upcoming meetings and receive invitations, check the DIF events calendar or subscribe to the DIF newsletter.


MOBI

The Digital Future of Consumer Packaged Goods (CPG): Embracing Transparency and Insights with Digital Product Passports and Data Spaces Interoperability

The Digital Future of Consumer Packaged Goods (CPG) Privacy-Preserving Traceability with Digital Product Passports & Data Spaces Interoperability The Consumer Packaged Goods (CPG) industry is undergoing a rapid transformation. Consumers demand accountability and sustainability—over 70% want detailed product information—while new regulations like the EU’s Ecodesign for Sustainable [...]

The Digital Future of Consumer Packaged Goods (CPG)

Privacy-Preserving Traceability with Digital Product Passports & Data Spaces Interoperability

The Consumer Packaged Goods (CPG) industry is undergoing a rapid transformation. Consumers demand accountability and sustainability—over 70% want detailed product information—while new regulations like the EU’s Ecodesign for Sustainable Products Regulation (ESPR) and the Uyghur Forced Labor Protection Act (UFLPA) reshape the landscape. Big data leaks and supply chain disruptions have become the norm. As regulations grow increasingly stringent, consumer trust falters, and operational costs skyrocket, a paradigm shift is essential. The industry needs verifiable, trusted data seamlessly integrated — and a system that enables seamless, permissioned data exchange across the value chain.

Digital Product Passports (DPPs) are a key component of this transformation. A DPP is a secure, globally unique digital record that stores verifiable information about a product throughout its lifecycle—from raw materials to disposal. Widespread adoption of DPPs is critical for building trust, strengthening value chain resilience, and ensuring regulatory compliance. According to the EU, which now requires DPPs for nearly all products sold in the EU, “the DPP is designed to close the gap between consumer demands for transparency and the current lack of reliable product data.”

However, DPPs alone are not enough. To truly unlock their potential, we need a standardized framework for data spaces interoperability—a system that enables direct, seamless transactions between participants across industries. Modern value chains involve collaboration among thousands of stakeholders, including enterprises, regulators, and consumers. When these value chains function efficiently, they improve product quality, optimize resource allocation, and lower costs. Yet, despite technological advancements, achieving this level of coordination remains a challenge.

Introduction to MOBI’s Web3 Infrastructure

Today, many organizations rely on third-party platforms and proprietary applications for data exchange, resulting in fragmented, siloed systems. This lack of interoperability limits collaboration and hinders critical processes, from ensuring product safety and ethical sourcing to enabling proper recycling and end-of-life management. Recognizing this, MOBI was formed in 2018 as a neutral convener for organizations to develop standards and infrastructure for DPPs and data spaces interoperability. MOBI’s Web3 Infrastructure, comprising Citopia Decentralized Marketplace (DM) and the Integrated Trust Network (ITN), offers a secure, decentralized marketplace framework with standardized communication protocols. Think of it as a private internet, wherein entities can engage in secure, autonomous, encrypted transactions.

Citopia DM and the ITN are built for Self-Sovereign Data and Identity (SSDI). Each participant owns and manages their own Self-Sovereign Digital Twin (SSDT), which stores two things:

a globally-unique Decentralized Identifier (DID), which is anchored and validated in the ITN
Verifiable Credentials (VCs) used for transactions in Citopia DM

This SSDT allows participants to engage in standardized, secure, and compliant transactions on Citopia DM with selective disclosure (data only goes to intended recipients). Both Citopia and ITN are system and cloud-agnostic, meaning stakeholders can seamlessly communicate while retaining their existing systems and web services. This removes the need for costly one-off integrations and eliminates prohibitive onboarding/maintenance costs, providing a robust foundation for data spaces interoperability.
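
For context on the W3C building blocks referenced here, the sketch below shows the rough shape of a DID document, which is what a participant's DID resolves to. The identifiers and key material are generic placeholders and are not drawn from the ITN or Citopia DM.

```typescript
// Rough shape of a W3C DID document; identifiers and keys are placeholders.
const didDocument = {
  "@context": ["https://www.w3.org/ns/did/v1"],
  id: "did:example:supplier-123",
  verificationMethod: [
    {
      id: "did:example:supplier-123#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:example:supplier-123",
      publicKeyMultibase: "z6Mk...placeholder",
    },
  ],
  // Key(s) the controller may use to authenticate as this DID
  authentication: ["did:example:supplier-123#key-1"],
  // Key(s) used to sign assertions such as Verifiable Credentials
  assertionMethod: ["did:example:supplier-123#key-1"],
};
```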

Citopia DM unlocks new possibilities for DPPs by enabling direct peer-to-peer transactions between value chain participants. Removing reliance on third-party intermediaries lowers costs, increases traceability, and drives compliance with emerging regulatory requirements. Companies leveraging MOBI’s infrastructure can streamline operations, reduce data silos, and improve their ability to meet evolving consumer and regulatory demands. For regulators and consumers of CPGs, Citopia DM offers easy access to trusted DPPs issued by companies. For regulators, this makes it easy to verify compliance claims. For consumers, access to DPPs can inspire confident purchasing decisions and boost brand loyalty.

Laying the Foundation for Generative AI Applications

MOBI is going to take this one important step further. The robust infrastructure it has built not only addresses the critical need for traceability and data exchange but also lays the foundation for powerful generative AI applications. By leveraging the rich, contextual data housed within DPPs and the seamless data flow facilitated by Citopia and SSDTs between rich data spaces, generative AI can move beyond generic analyses to provide participant-specific value. Imagine a scenario where:

Consumers can interact with generative AI agents to understand the complete lifecycle of a product they are considering purchasing. These agents, equipped with DPP data accessed through Citopia, can answer nuanced questions about a product’s sustainability footprint, ethical sourcing, or even detailed ingredient breakdowns, generating responses tailored to the individual consumer’s values and concerns. For example, a consumer with specific dietary restrictions or sustainability preferences could ask a generative AI agent: “Show me the carbon footprint and allergen information for the ingredients in this cereal, and suggest alternatives with lower environmental impact and no nuts.” The agent, accessing the DPP via Citopia, can generate a personalized, privacy-preserving response, drawing from verified data and offering actionable recommendations.

CPG Companies can utilize generative AI to optimize their operations and gain deeper market understanding. Generative AI agents can analyze aggregated and anonymized DPP data from across the value chain within Citopia to identify supply chain inefficiencies, predict potential disruptions, or personalize marketing campaigns with unprecedented precision. For instance, a generative AI agent could analyze DPP data to recommend optimal sourcing strategies based on real-time insights into material availability, ethical considerations, and environmental impact, while ensuring compliance with regulations like UFLPA. Furthermore, generative AI can assist in generating customized sustainability reports or proactively flag potential regulatory compliance issues based on DPP data, streamlining operations and reducing risks.

Regulators can employ generative AI agents to efficiently monitor and verify compliance with evolving regulations like the ESPR. These agents can be granted permissioned access to DPP data within Citopia, enabling them to automatically audit product information against regulatory requirements and identify potential non-compliance issues at scale. Generative AI can generate summaries of compliance status across product categories or highlight specific areas needing further investigation, significantly enhancing regulatory oversight and consumer protection.

The key to enabling these generative AI applications lies in the architecture of MOBI’s Web3 infrastructure. SSDTs with secure identities backed by the ITN ensure that each participant retains control over their data, can choose what data to disclose (and to whom), and can securely exchange verifiable data in a regulatory-mandated Zero Trust Architecture.

Generative AI agents operating within this framework can be designed to access and process data in a privacy-preserving manner. For example, they can utilize techniques like differential privacy or federated learning to generate insights from aggregated DPP data without needing to access or expose the raw, sensitive data of individual participants. Selective disclosure of VCs ensures that only authorized agents receive the necessary data points, minimizing the risk of data breaches or misuse.

Conclusion

This means that the combination of DPPs and interoperable data spaces facilitated by MOBI’s infrastructure and generative AI represents an entirely new business phase for the CPG industry. It moves beyond basic traceability to enable a future where data becomes a dynamic tool for generating participant-specific value, fostering deeper consumer trust, optimizing business operations, and ensuring robust regulatory compliance – all within a secure and privacy-respecting ecosystem.

MOBI’s Web3 infrastructure is, therefore, a game-changer for the CPG industry, offering a scalable, decentralized approach to data verification and interoperability while enabling participant-specific insights and recommendations using generative AI. As consumer expectations evolve and regulatory landscapes shift, embracing decentralized, self-sovereign solutions will be the key to sustainable growth and competitive advantage. The future of CPG lies in verifiable, trusted data, and MOBI is leading the way in building the infrastructure to support it.

The post The Digital Future of Consumer Packaged Goods (CPG): Embracing Transparency and Insights with Digital Product Passports and Data Spaces Interoperability first appeared on MOBI | The New Economy of Movement.


GS1

Deutsche Bahn (DB) + OHB: Standardized Product labelling

Deutsche Bahn (DB) + OHB: Standardized Product labelling The implementation of serialized labelling, which results from the current supplier conditions of Deutsche Bahn, poses great challenges for many suppliers in the railway sector. Questions such as "What do I have to mark and in what way?", "Where exactly does the co
The implementation of serialized labelling, which results from the current supplier conditions of Deutsche Bahn, poses great challenges for many suppliers in the railway sector.

Questions such as "What do I have to mark and in what way?", "Where exactly does the code go?", "What exactly are GS1 standards?" or "Who will assure me that I am doing it correctly in the end?" are just some of the issues companies are confronted with at the beginning of such a project. This was no different for OHB Teledata GmbH. Here too, the company was faced with the decision of whether to tackle the issue internally, with an uncertain outcome, or seek external help. In the end, they chose the support of the oneIDentity+ experts to successfully implement the project in an efficient and timely manner. This is the project report.

deutsche-bahn-ohb.pdf

Schaeffler + Swiss Federal Railways (SBB): Significant progress with the digitalization of its fleet

Schaeffler + Swiss Federal Railways (SBB): Significant progress with the digitalization of its fleet In partnership with Schaeffler, Swiss Federal Railways (SBB) has made significant progress with the digitalization of its fleet. The basis for the new life cycle management of bearing data is a data matrix code (DMC) on t
In partnership with Schaeffler, Swiss Federal Railways (SBB) has made significant progress with the digitalization of its fleet.

The basis for the new life cycle management of bearing data is a data matrix code (DMC) on the axlebox bearings that allows data on the bearing history – from production to operation and maintenance – to be retrieved and shared between various companies worldwide.

This creates a lot of benefits:

Storage of all bearing-related data over the entire life cycle
End-to-end traceability of all processes
Predictive maintenance thanks to timely identification of weak points
Information gain for process and production optimizations
Enabler for 100% return service within the framework of bearing reconditioning
Basis for fast, reliable and sustainable action

schaeffler-swiss-federal-railways-sbb.pdf

We Are Open co-op

How to be an Activist in a World of AI

In our final post in the AI and Activism series, we look to help you live your values as you explore new AI tools and services. Be sure and read the other three posts in this series to understand the narratives, complexities and principles that inform this work. Understanding Predominant Narratives in AI Systemic complexities with AI Starting principles for the ethical use of AI

In our final post in the AI and Activism series, we look to help you live your values as you explore new AI tools and services. Be sure and read the other three posts in this series to understand the narratives, complexities and principles that inform this work.

Understanding Predominant Narratives in AI
Systemic complexities with AI
Starting principles for the ethical use of AI
How to be an Activist in a World of AI

The landscape of technology, and of artificial intelligence (AI), is complex, as is its impact on our communities and the environment. We spend a lot of time thinking about how our technology choices relate to our values. It is challenging to navigate the powerful adversaries of hyper-capitalist economies and, sometimes, we struggle to balance between our values and what is practical. Educators, activists and campaigners are well aware of the power dynamics at play in our society.

The narrative surrounding AI often presents a binary choice between dystopian views, which dismiss human agency and engagement, and utopian perspectives, which assume that AI is a neutral “good” that can be enjoyed equally by all. This dichotomy oversimplifies the complexities of AI’s impact on our world. The reality lies in understanding how powerful forces and predominant narratives shape technology and society.

Lovelace GPU by Hanna Barakat & Cambridge Diversity Fund

Our Hyper-Capitalist Economy

Our participation in hyper-capitalist economies comes with a cost: we are often forced to engage with systems that are designed to maximize profits over people and planet. Companies prioritize growth and innovation over social responsibility and environmental sustainability. Governments prioritize growth over well-being.

There are people and companies that work to undermine humanist efforts and maintain control over common narratives for the sake of profit and prestige. We are told that there is no other way. We, us normies, are asked to make concessions, while the ultra-rich continue to desecrate our planet and communities. Corporations, governments, and others use disinformation campaigns, propaganda and a slew of rhetorical fallacies to discredit our concerns about AI’s impact on society and the planet. Venture capital often prioritizes short-term gains over long-term sustainability. Companies use greenwashing tactics to present themselves as environmentally responsible, but they continue to prioritise profit over sustainability.

In our modern, very capitalist society, many of our social endeavours and non-profits rely on donations or grants from corporations and risk being co-opted (in a bad way) by these same interests. The promise of funding can create a Faustian bargain, where activists must compromise their values to secure resources for the work.

We all make adjustments to be able to live with ourselves.

Finding Ways to Live Our Values

It can be daunting to look at the ways in which our societies could be better and do the work to make them so. Our place in history seems to make it more difficult than ever to live our values, at least if those values are rooted in equity and solidarity.

One way, though, to live our values is to make hard choices and stand by them. We can choose whether or not to engage with a company, service, platform or even funding. Sometimes having a red line is a great way to live up to your own morals. Worried about your environmental impact? Learn about local and independent LLMs instead of defaulting to one of Big Tech’s AI services. Irritated by the proliferation of surveillance? Prioritize AI companies that have strong privacy policies.

Another way to live and share your values is to prioritize community-led initiatives, and help local organizations take ownership of decision-making processes. Small and medium-sized organisations need help piecing together how they might use AI and other technologies. Helping others understand what you choose to advocate for can help ensure that projects align with community values rather than corporate interests.

Perhaps the most important strategy is to understand the intersectionality in activism. We need international cooperation and knowledge sharing between climate justice, digital rights, and other justice movements. By working together, we can pool our resources and expertise to create more effective solutions for the challenges posed by AI.

Remember your power

We are surrounded by powerful forces that make activism for better, more ethical and sustainable AI difficult, but that was always the case. Activism is, at its heart, a rejection of the status quo. AI presents us with just another complex landscape that requires us to remember our own agency.

It has never been easy to be an activist and it has always been essential to recognize the power dynamics at play. When considering AI’s impact on society, acknowledge complexities and work together. We can create positive change in our communities and advocate for responsible AI development that prioritizes people and planet over profits.

Learn more about Harnessing AI for Environmental Justice and Digital Campaigning.

Technology has changed so much about our world. Now, AI is changing things again and at hyperspeed. We work to understand its potential implications for our communities, learners and programmes. Do you need help with AI Literacies, strategy, or storytelling? Get in touch!

How to be an Activist in a World of AI was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.


EdgeSecure

Accelerating Change with Business Process Management In Higher Education

The post Accelerating Change with Business Process Management In Higher Education appeared first on NJEdge Inc.

Monday, 24. February 2025

FIDO Alliance

Biometric Update: Biometrics connecting ID and payments through digital wallets, apps and passkeys

Biometrics are connecting with payment credentials, whether through numberless credit cards and banking apps or passkeys, as the concrete steps towards linking digital identity and payment systems shows up as […]

Biometrics are connecting with payment credentials, whether through numberless credit cards and banking apps or passkeys, as concrete steps towards linking digital identity and payment systems show up as a major theme in the week’s most-read stories on Biometric Update. Mastercard announced it will ditch the familiar credit card number in favor of on-device biometrics and tokenization, while everyone in digital wallets, from the EUDI Wallet Consortium to Fime and Mattr to Apple, is looking at how to bring together identity and payments, and Visa argues for the role of passkeys in a converged digital ID and payments ecosystem.


CPO Magazine: Passkey Authentication and Its Relevant Authentication Standards

Passkey authentication replaces traditional passwords with a pair of cryptographic keys—public and private. The private key stays on the user’s device, while the public key sits on the server. During login, […]

Passkey authentication replaces traditional passwords with a pair of cryptographic keys—public and private. The private key stays on the user’s device, while the public key sits on the server. During login, the server issues a challenge that only the private key can solve, and the response gets verified using the public key. No passwords are transmitted or stored, which reduces the attack surface significantly. Password leaks and brute-force attempts become non-issues because there is no static secret to steal or guess.

FIDO2 is a joint initiative by the FIDO Alliance and the World Wide Web Consortium (W3C) aimed at delivering streamlined, strong authentication without relying on passwords. It defines a set of technical components: WebAuthn and CTAP2 (Client to Authenticator Protocol). WebAuthn standardizes how a web application interacts with an authenticator—often a platform feature like a secure enclave on a phone or a hardware security key. CTAP2 governs how that authenticator communicates with the client device, such as a laptop or smartphone.
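
As a hedged illustration of how these pieces fit together in a browser, the sketch below shows a simplified passkey sign-in using the WebAuthn API; the relying party ID and server endpoints are assumptions made for the example, not part of any particular product.

```typescript
// Simplified browser-side passkey sign-in using the WebAuthn API.
// The endpoints ("/webauthn/challenge", "/webauthn/verify") and the relying
// party ID are assumptions for illustration only.
async function signInWithPasskey(): Promise<void> {
  // 1. Fetch a fresh, single-use challenge from the server (assumed base64-encoded).
  const { challenge } = await fetch("/webauthn/challenge").then((r) => r.json());

  // 2. Ask the authenticator to sign the challenge with the passkey's private key.
  //    The private key never leaves the device; only the signed assertion does.
  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      rpId: "example.com", // assumed relying party ID
      userVerification: "required",
    },
  })) as PublicKeyCredential;

  const assertion = credential.response as AuthenticatorAssertionResponse;

  // 3. Send the assertion to the server, which verifies the signature against
  //    the public key it stored at registration time.
  await fetch("/webauthn/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: credential.id,
      clientDataJSON: btoa(String.fromCharCode(...new Uint8Array(assertion.clientDataJSON))),
      authenticatorData: btoa(String.fromCharCode(...new Uint8Array(assertion.authenticatorData))),
      signature: btoa(String.fromCharCode(...new Uint8Array(assertion.signature))),
    }),
  });
}
```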


HealthcareIT: Passwords Are the Problem: How More Secure Authentication Methods Can Transform Healthcare Workflows

Username and password authentication is a fixture in healthcare but one that continues to hinder operations and put patient privacy – and care – at risk. In just the first […]

Username and password authentication is a fixture in healthcare but one that continues to hinder operations and put patient privacy – and care – at risk. In just the first three months of 2024, there were over 116 data breaches in the healthcare industry, allowing cybercriminals to access private patient data, medications, clinical records, Social Security numbers, and more by employing tactics like phishing emails and malware.

As a result, passwordless authentication is steadily gaining traction, enabling healthcare facilities to implement more secure user verification and streamline access management.

The transition to passwordless won’t happen overnight. However, we can expect continued adoption of passwordless methods over the next decade, as the challenges of traditional passwords become too glaring to ignore in this mission-critical industry.


Health Management: The Future of Healthcare Security: Embracing Passwordless Authentication

Traditional username and password authentication remains a standard practice in healthcare, but it increasingly compromises operational efficiency, patient privacy and care quality. In the first quarter of 2024 alone, over […]

Traditional username and password authentication remains a standard practice in healthcare, but it increasingly compromises operational efficiency, patient privacy and care quality. In the first quarter of 2024 alone, over 116 data breaches exposed sensitive patient data, including medications, clinical records and Social Security numbers. Cybercriminals use tactics like phishing and malware to exploit these vulnerabilities, underscoring the need for stronger authentication measures. As a response, passwordless authentication is gaining traction, offering a more secure and streamlined approach to access management. Although the transition will take time, the next decade will likely see widespread adoption of passwordless solutions as the limitations of passwords become too costly to ignore.


Elastos Foundation

The World Computer Initiative: Building Digital Freedom

Imagine opening your browser, typing in “pc2.net,” and instantly connecting to your personal Virtual Computer—powered by Elastos—where you log in via your own decentralized identity. In this private OS environment, your home NAS station or personal cloud server hosts a personal AI that securely processes your data, away from corporate eyes. You can talk to […]

Imagine opening your browser, typing in “pc2.net,” and instantly connecting to your personal Virtual Computer—powered by Elastos—where you log in via your own decentralized identity. In this private OS environment, your home NAS station or personal cloud server hosts a personal AI that securely processes your data, away from corporate eyes. You can talk to it, and also purchase digital goods—like software dApps or music—through the global interconnected dApp store or Elacity’s marketplace, paying creators directly for their value and downloading encrypted code from IPFS to your personal cloud with blockchain-validated ownership.

Your smart devices—from security cameras to door locks—communicate locally through your personal cloud, while your AI handles tasks like security and fitness analytics without leaking sensitive information. If you choose, you can also run a node to support the network—offering Elacity CDN services or acting as a BeL2 arbiter—and receive automatic rewards through your decentralized wallet. One click to opt out and you’re done—no centralized data centers, no middlemen, just peer-to-peer connectivity. This is Web3 as it was always meant to be: digital freedom and autonomy.

On January 31, 2025, the Elastos community approved the World Computer Initiative proposal. In simple terms, this initiative aims to transform Elastos from a powerful but hidden infrastructure into a consumer-friendly, decentralized operating system—a true “World Computer.” Today, computing is centralized. Big tech controls our data, identities, and computing power. We rent access to clouds that act as custodians over our digital lives, rather than truly owning our digital assets ourselves. So, what’s the simple solution?

Ownership: You should own your data and identity.
Simplicity: Interactions should be as easy as typing a web address.
Decentralization: Control should be spread out, not held by a single entity.

 

The Core Components

ElastOS Decentralized Compute (Puter): Imagine launching a virtual computer by typing “pc2.net” in your browser. This virtual computer runs decentralized apps without needing traditional servers. The basis of ElastOS will be Puter.com.

True Data Ownership (Elastos): As a user, you need to log in to your computer; for this, we use Elastos’ decentralised toolsets, which let users manage their own digital identities and files on distributed networks, ensuring privacy and security without reliance on centralised platforms.

User-Friendly Experience (UX): Despite the underlying Web3 technology, the complex world of blockchain is hidden behind an easy-to-use interface with familiar login methods like email or biometrics. The toolsets to achieve this will be Particle Network and account and chain abstraction technology.

Marketplace (Elacity): Through toolsets like Elacity, a new marketplace will let creators and consumers buy, sell, and trade digital goods directly—without middlemen. This is very important to forming an economy on the World Computer, as making money in return for providing value is what drives economies, growth and adoption.

Decentralized Hardware (DePin): Users should connect to their own clouds, leveraging hardware like home NAS stations. This provides complete privacy, the ability to run AI models, and peer-to-peer connections to other IoT devices. Users can download dApps and nodes to interact with other interconnected hardware devices, earn rewards for supporting others, and download data onto their devices for true ownership, with the rights to decrypt stored in the user’s decentralised wallet. The DePin initiative creates a free market for compute and storage, moving away from big cloud providers.

 

In less than a week, the team, led by Sash, Anders and Rong, has onboarded Puter CEO and founder Nariman Jelveh to the initiative, holding a call together to discuss plans and align on upcoming goals for the World Computer Initiative (WCI).

 

Puter is now forked and available at PC2.net, providing the canvas to begin plugging in Web3 toolsets!

At the same time, Elacity has released its new labs website and is preparing a major release slated for late March. This version includes:

Simple Web2-like logins
Channels with time-based subscriptions and ERC-20 token access
Peer-to-peer messaging
Royalty markets for creators

 

Additionally, the DePIN hardware is progressing quickly, with a NAS station that sports:

6T NPU and 8-core ARM v8 64-bit CPU
Two SATA storage drives
Ability to run both Android and Linux seamlessly
Virtual machine and container support for flexible app deployment

This hardware is designed to be open-source friendly, so anyone can run software like DeepSeek AI or other Linux-based tools. Here is a look at the initial designs.

The Elastos infrastructure (like BeL2 and the ELA Mainchain) is robust but has remained behind the scenes. Now is the time to bring these innovations front and center. The proposal aligns technical teams and market strategies, ensuring continuous development and clear, milestone-driven progress. By returning control to users, this initiative makes decentralized computing not only possible but also practical and accessible. Starting in late March 2025, the roadmap unfolds:

Elacity v2 Launch: A consumer-ready platform with easy logins, secure messaging, and a dynamic digital marketplace.
Integration with DePIN: Tying in decentralized storage and compute to create a seamless user experience.
Full ElastOS Rollout: A complete decentralized operating system that transforms how we interact with digital technology.

So, imagine connecting to your Virtual Computer with a single web address, talking to your personal AI, and easily trading digital goods you fully own. This is how we reclaim digital freedom and seize a multi-trillion-dollar opportunity to advance the internet as we know it, with all of society able to participate. Let’s build that future, together. Did you enjoy this article? To learn more, follow Infinity for the latest updates here!

Friday, 21. February 2025

FIDO Alliance

FIDO Alliance Melbourne Seminar 2025

Navigating Passkeys: A Deep Dive into FIDO Authentication in Australia Overview The FIDO Alliance recently held a pivotal one-day seminar exploring the transformative power of passkey authentication in Australia and […]
Navigating Passkeys: A Deep Dive into FIDO Authentication in Australia

Overview

The FIDO Alliance recently held a pivotal one-day seminar exploring the transformative power of passkey authentication in Australia and beyond.

This dynamic event attracted 150-200 influential leaders and decision-makers from government, consumer, and enterprise sectors to explore the future of secure online identity in Australia, New Zealand, and overseas. The agenda covered use cases, case studies, and the latest data and collaboration happening to implement passkeys for consumers, workers, and governments regionally and around the world.

View the presentations below:

FIDO Alliance – Simpler Stronger Authentication.pptx from FIDO Alliance

From Authentication to Assurance – Managing risk in passkeys and beyond.pptx from FIDO Alliance

From Requirements to Rollout – VicRoads’ Experience with Passkeys.pptx from FIDO Alliance

How to Simplify and Accelerate Passkey Adoption.pptx from FIDO Alliance

IdentityVerification IDV + Passkeys.pptx from FIDO Alliance

Insights from Large-Scale B2C Passkey Deployments.pptx from FIDO Alliance

Passkeys – Why Moving Now Makes Sense.pptx from FIDO Alliance

FIDO and Government: How Policymakers and Regulators are Thinking About Authentication.pptx from FIDO Alliance

Oasis Open

Invitation to comment on Lightweight Verifiable Schema Credential v1.0

OASIS and the LVCSP TC are pleased to announce that Lightweight Verifiable Schema Credential v1.0 CSD01 is now available for public review and comment.  This document defines a lightweight schema for Verifiable Credentials to support digital (also known as electronic) “Know Your Customer” (eKYC) processes, based on the W3C Verifiable Credential (VC) standards. Through adoption […] The post

Public Review - ends March 24th

OASIS and the LVCSP TC are pleased to announce that Lightweight Verifiable Schema Credential v1.0 CSD01 is now available for public review and comment. 

This document defines a lightweight schema for Verifiable Credentials to support digital (also known as electronic) “Know Your Customer” (eKYC) processes, based on the W3C Verifiable Credential (VC) standards. Through adoption of this standard, individuals are able to share their verified identity claims across different digital platforms and services. This standard is referred to as a “Lightweight Verifiable Credential Schema,” abbreviated as LVCS, because it provides a basic schema and format, without significant options or conditions.
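
The normative schema lives in the specification itself; purely as a hedged illustration of the W3C Verifiable Credential data model it builds on, an identity-claims credential might look roughly like the sketch below. The credential type and claim names are placeholders, not fields taken from the LVCS document.

```typescript
// Illustrative W3C Verifiable Credential carrying basic eKYC-style claims.
// Only the outer structure follows the W3C VC Data Model; the credential type
// and claim names are placeholders, not the normative LVCS schema.
const exampleCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "IdentityCredential"], // "IdentityCredential" is assumed
  issuer: "did:example:issuing-bank",
  issuanceDate: "2025-02-21T00:00:00Z",
  credentialSubject: {
    id: "did:example:holder-456",
    givenName: "Alex",
    familyName: "Example",
    birthDate: "1990-01-01",
  },
  // A real credential also carries a cryptographic proof, e.g. a Data Integrity
  // proof object or an external JWT/SD-JWT envelope, so verifiers can check it.
};
```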

The documents and all related files are available here:

Lightweight Verifiable Credential Schema Version 1.0

Committee Specification Draft 01

10 February 2025

Editable source:

https://docs.oasis-open.org/lvcsp/lvcs/v1.0/csd01/lvcs-v1.0-csd01.docx (Authoritative)

HTML:

https://docs.oasis-open.org/lvcsp/lvcs/v1.0/csd01/lvcs-v1.0-csd01.html

PDF:

https://docs.oasis-open.org/lvcsp/lvcs/v1.0/csd01/lvcs-v1.0-csd01.pdf

For your convenience, OASIS provides a complete package of the specification document and any related files in a ZIP distribution file. You can download the ZIP file at:

https://docs.oasis-open.org/lvcsp/lvcs/v1.0/csd01/lvcs-v1.0-csd01.zip

How to Provide Feedback

OASIS and the LVCSP TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of our technical work.

The public review starts 21 February 2025 and ends 24 March 2025 at 23:59 UTC. 

Comments may be submitted to the project by any person through the use of the project’s Comment Facility. TC members may submit comments directly to the TC’s mailing list. All others can submit comments by following the instructions listed on this page.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification. 

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the LVCSP TC can be found at the TC’s public home page located here.

Additional references:

[1] https://www.oasis-open.org/policies-guidelines/ipr/

[2] http://www.oasis-open.org/committees/lvcsp/ipr.php

Intellectual Property Rights (IPR) Policy

The post Invitation to comment on Lightweight Verifiable Schema Credential v1.0 appeared first on OASIS Open.


Hyperledger Foundation

Mentorship Spotlight: BiniBFT Implementation - The Optimized BFT on Fabric

What We Worked On



Elastos Foundation

Elastos 0.9.9 BPoS Upgrade: Everything You Need to Know

Fellow Elastos Community Members, we are pleased to announce the new Elastos.ELA version 0.9.9 release. This is a major upgrade that introduces important changes to the BPoS consensus mechanism, including malicious behaviour penalties and new rules for node operation. All node operators—especially BPoS node owners—must upgrade to this latest version as soon as possible to […]

Fellow Elastos Community Members, we are pleased to announce the new Elastos.ELA version 0.9.9 release. This is a major upgrade that introduces important changes to the BPoS consensus mechanism, including malicious behaviour penalties and new rules for node operation. All node operators—especially BPoS node owners—must upgrade to this latest version as soon as possible to continue earning rewards and ensure the network’s stability and security.

Elastos.ELA version 0.9.9 is available for download at: https://download.elastos.io/elastos-ela/elastos-ela-v0.9.9/

Below, we cover the key highlights and help answer frequently asked questions about this release.

1. Does this upgrade affect voters or just BPoS node owners?

This upgrade affects both node owners and voters (stakers):

BPoS Node Owners: Must upgrade their nodes to version 0.9.9. Failure to upgrade will mean they cannot participate in block production or earn block rewards.
Voters (Stakers): If the node you have voted for becomes inactive or malicious, you will not receive rewards. So, it is essential to ensure your votes are placed on upgraded and properly functioning nodes.

2. When will slashing occur, and what exactly is considered malicious?

Slashing can occur whenever a node performs malicious actions after this upgrade goes live on mainnet. In other words, once the network is running version 0.9.9 with the penalty mechanism active, the offending node can be slashed. Three primary malicious behaviors trigger slashing:

Illegal Block
Proposing two different blocks at the same height within the same view.

Illegal Proposal
Proposing consensus on different blocks within the same view.

Illegal Votes
Casting both a “yes” and “no” vote for the same block within the same view, or casting multiple “yes” votes on different block proposals within the same view.

Any node engaging in any of these behaviors can be fined 200 ELA and marked as invalid, and stakers of that node will also be unable to receive rewards going forward.

3. Will node downtime be considered malicious?

No. If your node goes offline or fails to produce a block due to downtime or other operational issues, it is considered “inactive,” not “malicious.” In this case:

The node will not earn block rewards while offline. Voters on this node also will not earn rewards during the downtime. No additional 200 ELA fine is levied for mere downtime.
However, if a node misses three consecutive block productions (excluding Council nodes), it will be marked as inactive and will not receive block rewards until it’s back online.

4. Are the new penalty and inactivity rules below accurate?
Negligence: If an arbitrator node (excluding Council nodes) fails to generate a block three consecutive times, the node and its stakers will not receive block rewards; the node is marked as inactive. Council nodes missing three blocks in a row are also marked inactive and do not receive block rewards, but they incur an additional fine based on a specific formula.
Do Evil: If a node commits malicious behavior (double-signing, illegal proposal, illegal votes), the node is fined 200 ELA, becomes invalid, and must be reactivated. Voters on the malicious node do not get fined but lose the ability to receive rewards since the node is invalid.
Tokens collected from fines are burned (removed from circulation).
If a node’s deposit falls below 2,000 ELA or its staked votes fall below 80,000 ELA, it becomes inactive.
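
Purely as an illustrative sketch of how the thresholds described above combine, the snippet below encodes the rules in simplified form; the constant names and structure are assumptions for readability, not the actual Elastos.ELA implementation.

```typescript
// Illustrative-only encoding of the 0.9.9 penalty rules described above;
// names and structure are assumptions, not the actual Elastos.ELA code.
const MALICIOUS_FINE_ELA = 200;
const MIN_DEPOSIT_ELA = 2_000;
const MIN_STAKED_VOTES_ELA = 80_000;
const MAX_MISSED_BLOCKS = 3;

type NodeState = "active" | "inactive" | "invalid";

interface BPoSNode {
  state: NodeState;
  isCouncilNode: boolean;
  consecutiveMissedBlocks: number;
  depositEla: number;
  stakedVotesEla: number;
}

function applyRules(node: BPoSNode, committedMaliciousAct: boolean): { fineEla: number } {
  // Malicious behaviour: fixed fine, node becomes invalid, fines are burned.
  if (committedMaliciousAct) {
    node.state = "invalid";
    return { fineEla: MALICIOUS_FINE_ELA };
  }
  // Negligence: three consecutive missed blocks marks the node inactive.
  // (Council nodes additionally incur a formula-based fine, not modelled here.)
  if (node.consecutiveMissedBlocks >= MAX_MISSED_BLOCKS) {
    node.state = "inactive";
  }
  // Falling below the deposit or staked-vote thresholds also deactivates the node.
  if (node.depositEla < MIN_DEPOSIT_ELA || node.stakedVotesEla < MIN_STAKED_VOTES_ELA) {
    node.state = "inactive";
  }
  return { fineEla: 0 };
}
```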

This 0.9.9 BPoS upgrade is a major milestone for the Elastos network, bringing increased security, consensus resilience, and slashing mechanics to strengthen our ecosystem.

Node operators: Please upgrade immediately to avoid disruption in rewards and consensus participation. Voters: Make sure the nodes you support have upgraded to remain eligible for voting rewards.

As always, we appreciate your continued support of Elastos. Stay connected on our official channels for the latest news and follow us on X, and don’t hesitate to ask questions or report any issues via GitHub or our community channels. Thank you for helping secure and grow the Elastos ecosystem!

Thursday, 20. February 2025

OpenID

Attend the OIDF Workshop prior to IIW Spring 2025 on 7th April 2025

The OpenID Foundation will be holding a hybrid workshop on Monday, 7th April 2025, just ahead of the Spring 2025 Internet Identity Workshop (IIW).  This hybrid event will take place both in person at Google’s facility in Sunnyvale, CA, and online making it accessible to participants worldwide.   Event Details: 📅 Date: Monday, April 7th, […] The post Attend the OIDF Workshop prior to II

The OpenID Foundation will be holding a hybrid workshop on Monday, 7th April 2025, just ahead of the Spring 2025 Internet Identity Workshop (IIW). 

This hybrid event will take place both in person at Google’s facility in Sunnyvale, CA, and online, making it accessible to participants worldwide.

  Event Details:

Date: Monday, April 7th, 2025

Time: 12:30 – 16:00 PST
Location: Google, 242 Humboldt Ct, Sunnyvale, CA 94089

Room: Zebra Shark
Virtual Option: Details on how to join virtually will be emailed to registrants nearer the time

This meeting is an excellent opportunity for the community to engage with fellow experts, share updates, and collaborate on the latest advancements across the OIDF specifications and Community Groups. With IIW just around the corner, it’s the perfect chance to align efforts and gain valuable insights before the main workshop begins.

Agenda Highlights:

Update on OIDF’s strategic aims for 2025
Working group updates
Discussion on emerging Digital ID trends

Why Attend?

Engage with leading experts in the industry
Shape the future and weigh in on the work of the Foundation
Prepare for IIW with discussions relevant to the upcoming workshop

 

Whether you plan to join us in Sunnyvale, CA or virtually, we look forward to your participation in shaping the next phase of digital identity standards.

All registered participants will receive a link to participate virtually prior to the workshop. This is an after-lunch workshop with beverages and snacks provided to those attending in person. The Foundation’s Note Well Statement can be found here and is used to govern workshops.

 

We will publish the full agenda soon. In the meantime, you can get ahead and guarantee your place by registering today!

Please register via Eventbrite HERE

The post Attend the OIDF Workshop prior to IIW Spring 2025 on 7th April 2025 first appeared on OpenID Foundation.


Human Colossus Foundation

Human Colossus Foundation at the 2025 Geneva Winter Summit: AI for Developing Countries

The Human Colossus Foundation (HCF) recently participated in the 2025 Geneva Winter Summit, a global gathering of AI experts, policymakers, and innovators hosted by the AI for Developing Countries Forum (AIFOD) at United Nation office at Geneva. The summit focused on AI’s potential for developing countries to drive equitable digital development, culminating in the AIFOD Geneva Winter Summit De

The Human Colossus Foundation (HCF) recently participated in the 2025 Geneva Winter Summit, a global gathering of AI experts, policymakers, and innovators hosted by the AI for Developing Countries Forum (AIFOD) at the United Nations Office at Geneva. The summit focused on AI’s potential for developing countries to drive equitable digital development, culminating in the AIFOD Geneva Winter Summit Declaration 2025, which charts a path for inclusive AI progress.

At the core of HCF’s contribution were two topics:

The importance of access to accurate data and its provenance in AI training, especially for small and less digitalized nations.

The role of distributed governance in fostering ethical AI agent ecosystems, which can give developing countries an edge without the need to build massive data centers for huge AI models.

The Power of Accurate Data and Provenance in AI Training

For AI to serve communities effectively, it requires access to massive data sets, ideally localized to avoid biases and misinformation. Unfortunately, many developing countries do not have such access; in many cases their digital transformation is only just starting. This, however, allows them to prepare for upcoming data needs: they can already begin shaping digital policies and strategies, investing from day one in more accurate data ecosystems that could give them an edge by relying on quality rather than quantity. AI models trained on verifiable, well-documented data are more accurate in their functions. Data provenance—the ability to trace data back to its source—is essential for ensuring AI models are transparent, reliable, and compliant with global data standards, while also allowing citizens and countries to verify potential bias and misinformation in AI-generated insights.

At the summit, HCF emphasized how Dynamic Data Economy (DDE) principles enable organizations to structure and verify data origins, ensuring AI models are trained on high-quality, trustworthy inputs.

Why does data provenance matter?

Builds trust and accountability in AI decision-making

Helps organizations comply with data protection regulations (e.g., GDPR, HIPAA)

Prevents bias and misinformation in AI-generated insights

HCF advocates for data-oriented architectures, where individuals and businesses can validate their data before it is used to train AI, ensuring a more transparent and responsible ecosystem.

Distributed Governance: The Key to Ethical and Sustainable AI

Governance models must evolve to keep pace with AI’s rapid growth. Traditional centralized governance structures often struggle to provide the inclusivity and adaptability needed for cross-border AI systems.

At the summit, HCF promoted a distributed governance model to help with classification standards, cross-jurisdictional ethical alignment, and enhanced data transparency. A meta-governance framework would equip stakeholders with accurate, verifiable, and accessible information, enabling informed AI adoption that aligns with local regulations and ethical values. By promoting fairness, transparency, and accountability, this framework supports a responsible, sustainable AI-driven digital economy.

How can distributed governance benefit AI ecosystems?

Inclusive decision-making – Empowering communities, businesses, and policymakers to shape AI’s future

Enhanced accountability – Avoiding centralized control that can lead to bias or exploitation

Interoperability across jurisdictions – Helping AI operate ethically across borders without conflicting regulations

Through distributed governance models, AI can serve global communities fairly, ensuring that technological advancements reflect a shared ethical and socio-economic vision.

The 2025 AIFOD Declaration: A Commitment to Inclusive AI

The Geneva Winter Summit Declaration 2025 serves as a blueprint for AI governance, highlighting the need for:

Equitable access to AI advancements for developing nations

A balance between regulation and innovation

Stronger international collaboration to ensure ethical AI deployment

HCF stands committed to advancing these goals through the Dynamic Data Economy, ensuring AI serves all communities equally, ethically, and sustainably.

Looking Ahead: HCF’s Vision for Ethical AI Development

The Human Colossus Foundation will continue advocating for accurate data management, distributed governance, and responsible AI adoption. Through collaboration with global stakeholders, we aim to shape AI ecosystems that prioritize transparency, inclusivity, and sustainability.


IDunion

That was the Digital Society Conference 2

Digital credentials and identities – the time is ripe! 26 November 2024, Change Hub Berlin, Hardenbergstraße 32, 10623 Berlin. On 26 November, the second Digital Society Conference (DSC2) took place at the Change Hub in Berlin. On 27 November, a workshop on organisational identities was held as part of the DSC2. As in the previous year, the […]
Digital credentials and identities – the time is ripe!

26 November 2024

At the Change Hub Berlin

Hardenbergstraße 32, 10623 Berlin

On 26 November, the second Digital Society Conference (DSC2) took place at the Change Hub in Berlin. On 27 November, a workshop on organisational identities was held as part of the DSC2.

As in the previous year, the DSC focused on digital credentials and identities. In a keynote, panels, and interactive work sessions on 26 November 2024, the genesis and framework conditions of the European digital identity solution and the EUDI Wallet were explained and facts conveyed. We also explored how this European solution fits into the international context and what, concretely, awaits us as a society in the coming years. The government-issued eID is only the beginning of a fast-moving and demanding development, in the course of which companies will also receive a digital ID to ensure the exchange of data and the secure identification of entities.

We would like to thank the speakers, panelists, and, in particular, our sponsors, without whom we could not have realised the DSC2.

DSC2 – Workshop 26.11.24

Digital credentials and identities – are you ready?

Jörg Fischer, Bundesdruckerei

Download Keynote

Work sessions formed around the following topics:

Connecting municipalities and small businesses
Wiki Data / organisational identities / definition of ID theft / fraud
Business models / payment integration
SDI initiatives outside the EU / interoperability
Killer applications for the European wallet (EUDI Wallet)

There was also a work session under the tongue-in-cheek header “Dackel-Club” (dachshund club): here, uses and applications of digital identities that do not require a government-issued identity were discussed, for example membership in a dachshund club. This may sound curious, but considering that more than 37% of Germany’s population is involved in clubs and associations, this field of application appears quite interesting for advancing wallet adoption and familiarity.

Handout: Wegweiser (guide)

Finally, the “Wegweiser Digitale Identitäten und Nachweise” (guide to digital identities and credentials) was presented. It is the product of collaboration within the BMWK’s Secure Digital Identities (SDI) research projects and is now available for download. The aim of the guide is to provide a low-threshold introduction to the terminology of digital credentials; it explains the terms wallet, digital identity, digital credential, and signature. The guide is free to use.

Download

DSC2 – Workshop 27.11.24

On the second day of the DSC2, a specialist workshop was held that was entirely dedicated to organisational identities (OrgID). First, the current state of affairs was presented in talks. This served as the basis for subsequent work in two large expert groups, whose goal was to further specify the open questions and points around the topics of access, data exchange, and technical design.

EUID_Wallets_KYS

20241120_EUDI_Wallet_Architecture

Wednesday, 19. February 2025

OpenID

FAPI 2.0 Security Profile and Attacker Model Final Specifications Approved

The OpenID Foundation membership has approved the following OpenID Final Specifications:   FAPI 2.0 Security Profile: https://openid.net/specs/fapi-security-profile-2_0-final.html FAPI 2.0 Attacker Model: https://openid.net/specs/fapi-attacker-model-2_0-final.html A Final Specification provides intellectual property protections to implementers of the specification and is not subject
The OpenID Foundation membership has approved the following OpenID Final Specifications:

FAPI 2.0 Security Profile: https://openid.net/specs/fapi-security-profile-2_0-final.html
FAPI 2.0 Attacker Model: https://openid.net/specs/fapi-attacker-model-2_0-final.html

A Final Specification provides intellectual property protections to implementers of the specification and is not subject to further revision. The FAPI Final Specifications are the product of the FAPI Working Group.

The voting results were:

Approve – 82 votes
Object – 0 votes
Abstain – 14 votes

Total votes: 96 (out of 401 members = 23.9% > 20% quorum requirement)

Marie Jordan – OpenID Foundation Secretary

The post FAPI 2.0 Security Profile and Attacker Model Final Specifications Approved first appeared on OpenID Foundation.


Elastos Foundation

BeL2 Update: Unified BTC Experience & Arbiter Incentives

Building on our previous breakthrough with the BeL2 V1 Beta, we’re excited to share our latest progress and roadmap updates. At its core, BeL2 is designed to leverage Bitcoin’s robust security and Elastos’ programmable framework to deliver true decentralization and financial sovereignty. Today’s update highlights improvements that simplify user interactions, align incentives for arbiters, and […]

Building on our previous breakthrough with the BeL2 V1 Beta, we’re excited to share our latest progress and roadmap updates. At its core, BeL2 is designed to leverage Bitcoin’s robust security and Elastos’ programmable framework to deliver true decentralization and financial sovereignty. Today’s update highlights improvements that simplify user interactions, align incentives for arbiters, and set the stage for broader network expansion.

Improving BeL2’s User Experience

We need to minimize friction. Soon, BTC users will be able to interact with BeL2 using only Bitcoin. Here’s how it works:

Collateralization: Users lock their BTC as collateral directly on the Bitcoin mainnet.

Arbiter Fee Payment: Fees traditionally paid in ELA will soon be settled in BTC, thanks to our new upgrade coming early March, removing the need to source additional tokens.

Gas Management: The ELA required for finalizing transactions on the ESC network will be easily provided by our upcoming ELA faucet.

This means a streamlined experience where BTC holders can secure USDC loans without worrying about multi-token management or gas issues.

Enhancing Arbiter Incentives: Stake ELA, Earn BTC

In our decentralized architecture, arbiters play a crucial role in ensuring transaction integrity and dispute resolution. To better align their incentives, we’re implementing a model where arbiters can stake ELA and earn rewards in BTC. This not only reinforces their commitment to network security but also provides a more attractive compensation structure—anchored by Bitcoin’s liquidity and robustness.

Technical Update:

BTC-Payment Arbiter Upgrade: Our engineers have completed the upgrade for the official Arbiter version, enabling BTC-based fee payments. Testing is ongoing, with the bulk of the work complete.

ELA Faucet Deployment: To further ease the transition, an ELA faucet is being deployed on the ESC network. This solution will automatically supply the necessary gas for transactions, providing a smooth experience for users. We anticipate completion as early as this week.

Looking Ahead: Roadmap and Partnerships

Near-Term Roadmap: With these enhancements, we’re targeting a major BeL2 update within the next two weeks. As soon as the engineering details are finalized, we’ll roll out an updated roadmap to detail our progress and next steps. Preliminary discussions with prospective partners are underway to use BeL2 for product solutions in both lending and stablecoin finance. To give BeL2 the best opportunity to succeed, we are also exploring:

Cross-Network Expansion: Extending BeL2’s reach to broader EVM networks beyond ESC to make BeL2’s Native Bitcoin DeFi protocol available across high-traffic chains. The reason for these strategic discussions relates heavily to ESC’s success to date and the need to position the technology where it is in harmony with developers’ needs, rather than competing with ours.

Optimised ELA Arbiter Participation: Enhancing ELA availability on these networks to enable ELA staking for arbiters, aligning with Elastos’ broader goals for scalability and decentralized governance.

This upcoming update in early March not only streamlines the process for BTC holders but also sets the foundation for robust, scalable, and secure financial applications. We invite our community to stay engaged as we finalize these improvements and look forward to sharing more detailed roadmap insights very soon. Stay tuned, and thank you for helping shape the future of decentralized finance with BeL2! Did you enjoy this article? To learn more, follow Infinity for the latest updates here!


Next Level Supply Chain Podcast with GS1

From Farm to Fork: The Logistics Behind Food Safety and Traceability

The Food Safety Modernization Act (FSMA) compliance deadline is approaching quickly, giving companies less than a year to meet new food safety and traceability requirements. But beyond compliance, why does traceability matter? In this episode, Wiggs Civitillo, Founder & CEO of Starfish,  joins hosts Reid Jackson and Liz Sertl to discuss how product traceability can streamline recalls, r

The Food Safety Modernization Act (FSMA) compliance deadline is approaching quickly, giving companies less than a year to meet new food safety and traceability requirements. But beyond compliance, why does traceability matter?

In this episode, Wiggs Civitillo, Founder & CEO of Starfish,  joins hosts Reid Jackson and Liz Sertl to discuss how product traceability can streamline recalls, reduce food waste, and build consumer trust. Inconsistent data and lack of interoperability are some of the biggest challenges companies face in food traceability. Starfish addresses these challenges by enabling secure, seamless data sharing across the supply chain.

Tune in to hear FSMA 204 explained and discover solutions to help companies stay compliant.

In this episode, you’ll learn:

Practical solutions to meet FSMA 204 requirements efficiently

The impact of real-time data on food safety monitoring

How companies can use traceability to build consumer trust

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(01:41) Challenges and lessons from the IBM Food Trust

(05:25) How Starfish connects supply chains

(13:58) Recalls, food safety, and consumer trust

(19:19) Understanding FSMA 204 and compliance

(25:06) Benefits of product traceability

(30:39) Wiggs’ favorite tech tool

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Wiggs Civitillo on LinkedIn

Check out Starfish


GS1

Maintenance release 2.12


 

Key Milestones:

See GS1 GDM Release Schedule

As content for this release is developed it will be posted to this webpage followed by an announcement to the community to ensure visibility.
GDSN Data Pools should contact the GS1 GDSN Data Pool Helpdesk to understand the plan for the update. Trading Partners should work with their Data Pools (if using GDSN) and/or Member Organisations on understanding the release and any impacts to business processes.

GDM 2.12 contains updated reference material aligned with ADB 2.6 and GDSN 3.1.29.

 

Updated For Maintenance Release 2.12

GDM Standard 2.12 (Nov 2024)

GDM Local Layers

China - GSMP RATIFIED (April 2022)

France - GSMP RATIFIED (November 2023)

Germany - GSMP RATIFIED (November 2023)

Poland - GSMP RATIFIED (November 2023)

Romania - GSMP RATIFIED (December 2021)

USA - GSMP RATIFIED (February 2023)

Finland - GSMP RATIFIED (November 2023)

Netherlands - GSMP RATIFIED (November 2024)

Italy - GSMP RATIFIED (May 2024)

 

Release Guidance

GDM Market Stages Guideline (June 2023)

GDM Attribute Implementation Guideline (Feb 2025)

GPC Bricks to GDM (Sub-) Category Mapping for GDM 2.14 (Feb 2025)

Attribute Definitions for Business (November 2024)

GDM (Sub-) Categories (October 2021)

GDM Regions and Countries (17 December 2021)

GDSN Release 3.1.29 (November 2024)

Tools

GDM Navigator on the Web 

GS1 GDM Attribute Analysis Tool (May 2024)

GDM Local Layer Submission Template (May 2024)

Training

E-Learning Course

Future Release Documentation

GPC Bricks to GDM (Sub-) Category Mapping for GDM 2.14 (Feb 2025)

About this standard

Any questions? We can help you get started using the GS1 standards.

Contact your local office

Tuesday, 18. February 2025

DIF Blog

Cryptographic Pseudonyms: A Short History

Guest blog by Greg Bernstein, Dave Longley, Manu Sporny, and Kim Hamilton Duffy Following the IETF/IRTF Crypto Forum Research Group’s adoption of the BBS Blind Signatures and BBS per Verifier Linkability (“BBS Pseudonym”) specifications, this blog describes historical context and details of cryptographic pseudonyms, as

Guest blog by Greg Bernstein, Dave Longley, Manu Sporny, and Kim Hamilton Duffy

Following the IETF/IRTF Crypto Forum Research Group’s adoption of the BBS Blind Signatures and BBS per Verifier Linkability (“BBS Pseudonym”) specifications, this blog describes historical context and details of cryptographic pseudonyms, as well as the privacy features they enable.

The discussion of cryptographic pseudonyms for privacy preservation has a long history. Chaum’s popular 1985 article “Security without identification: transaction systems to make big brother obsolete” (Chaum1985) already addressed many features of such systems, such as unlinkability, constraints on their use (for example, one pseudonym per organization), and accountability for pseudonym use. Although Chaum’s proposal makes use of different cryptographic primitives than we will use here, one can see similarities in the use of both secret and “public” information being combined to create a cryptographic pseudonym.

Lysyanskaya’s 2000 paper (Lysya2000) likewise addresses the unlinkability of pseudonyms, but also provides protections against dishonest users. In addition, it gives practical constructions, similar to those used in our draft, based on discrete-logarithm and sigma-protocol ZKPs. Finally, as part of the ABC4Trust project, three flavors of pseudonyms were defined:

Verifiable pseudonyms are pseudonyms derived from an underlying secret key.

Certified pseudonyms are verifiable pseudonyms derived from a secret key that also underlies an issued credential.

Scope-exclusive pseudonyms are verifiable pseudonyms that are guaranteed to be unique per scope string and per secret key.

Figure 1: Types of Pseudonyms

The BBS based pseudonyms in our draft are aimed primarily at providing the functionality of the pseudonym flavors 2 and 3 above.

How BBS is Fundamentally Better for Solving Dishonest Holder Fraud

Beyond the usual anti-forgery and tamper-evident protections that digital signatures provide, there are two different types of credential fraud that unlinkable credentials need to mitigate: third-party fraud and first-party fraud.

Third-party fraud is when one party uses another party's credentials without their knowledge or approval. This involves the theft of an honest holder's credentials.

First-party fraud is when a legitimate credential holder intentionally allows other parties to covertly use their credentials. This involves not the theft of credentials, but rather, a dishonest holder.

BBS signatures provide two different security features, one to address each of these cases:

1. Holder Multifactor Authentication

This security feature binds a BBS credential to a particular secret that only the holder knows and that an honest holder will not share access to nor use of. When a holder presents a BBS credential, a verifier can require that the holder prove use of this secret, thereby enforcing this protection on the holder's behalf. This is an anti-theft mechanism for mitigating third-party fraud.

Note: This feature is sometimes called "holder binding", but it might be more accurately understood as "Holder Multifactor Authentication". This is because the holder is not themselves bound to the credential and it is not a mitigation for first-party fraud. It instead works like another MFA device does when performing authentication. If a dishonest holder shares that MFA device or an API to use it, then someone else can unlinkably present their credential (with the holder's approval -- perhaps even for a fee). Stopping dishonest holders from doing this is a significant challenge that involves locking down users' hardware and software – and a single failure in this scheme could potentially result in an unlimited number of unlinkable, fraudulent presentations. This feature is therefore not a protection for verifiers, but rather one for honest holders against theft. With this in mind, it can be implemented such that a holder is free to use software or hardware of their choice to provide the protection, without requiring specific approval from the issuer.

2. Pseudonyms

This security feature binds a BBS credential to a secret that is constructed from inputs from both the holder and the issuer. When a holder presents a BBS credential, a verifier can require that the holder present a pseudonym that is based on this secret and a contextual identifier. Each time the same BBS credential is presented using the same contextual identifier, the pseudonym will be the same. This prevents a dishonest holder from covertly enabling an unlimited number of unlinkable presentations of their credential by any parties they authorize. It is an anti-cloning mechanism for mitigating first-party fraud.

Contextual identifiers can be any value, but need to be agreed upon between the verifier and the holder for a given use case. To give some examples for the presentation of credentials on the Web, a contextual identifier could be a URL like a Web origin (e.g., "https://website.example"), a Web origin with a certain path (e.g., "https://website.example/groups/wind-surfing"), or protocol-defined combination of a URL and a time range, allowing for pseudonyms to be "forgotten" or "rotated" after sufficient time has passed.

Note: Other credential schemes aim to mitigate first-party fraud by limiting the device and software that a holder chooses to engage with, sometimes referred to as "holder binding". With this approach, the holder's own device and software are trusted to be leveraged against them to constrain their behavior. This approach requires device and software allow lists, trust framework management, significant additional protocol security considerations, and ultimately means that the issuer chooses the holder's device and software (or provides them with a list of acceptable options from which they may choose). This has additional side effects, such as centralizing and vendor lock-in effects on the marketplace of devices and software. Finally, with this approach, it also only takes a single dishonest holder to thwart the protections of the device or software (that they have physical access to) to re-enable an unlimited number of unlinkable fraudulent presentations of a valid credential in the ecosystem. Catching this behavior after the fact logically requires being able to link presentations once again, one way or another, which would defeat the privacy aims of the scheme.

Overview: BBS Signature Bound Pseudonyms

The BBS signature scheme, the foundation for BBS Pseudonyms, is based on a three-party model:

Signer (also known as issuer): Issues credentials

Prover (also known as holder or user): Receives credentials

Verifier: Validates proofs

A prover obtains a BBS signature from a signer over a list of BBS messages and presents a BBS proof (a proof of possession of a BBS signature) along with a selectively disclosed subset of the BBS messages to a verifier.

Figure 2: Example of BBS Signature Flow

Each BBS proof generated is unlinkable to other BBS proofs derived from the same signature and from the BBS signature itself. If the disclosed subset of BBS messages are not linkable then they cannot be linked by their cryptographic presentation alone.

Note: the language used in this section is intentionally informal; for a more precise explanation of phrases like “…cannot be linked by their cryptographic presentation alone”, please see Lysya2000.

BBS pseudonyms extend the BBS signature scheme to “bind” a “cryptographic pseudonym” to a BBS signature retaining all the properties of the BBS signature scheme:

A short signature over multiple messages

Selective disclosure of a subset of messages from prover to verifier

Unlinkable proofs

In addition BBS pseudonyms provide for:

An essentially unique identifier bound to a signature/proof of signature whose linkability is under the control of the prover in conjunction with a verifier or group of verifiers. Such a pseudonym can be used when a prover revisits a verifier to allow the verifier to recognize the prover when they return, or for the prover to assert their pseudonymous identity when visiting a verifier.

Assurance of per-signer uniqueness, i.e., the signer assures that the pseudonyms that will be guaranteed by the signature have not been used with any other signature issued by the signer (unless a signature is intentionally reissued).

The signer cannot track the prover presentations to verifiers based on pseudonym values.

Verifiers in separate “pseudonym groups” cannot track prover presentations.

How BBS Pseudonyms Work

Overview

To realize the above feature set we embed a two part pseudonym capability into the BBS signature scheme. The pseudonym’s cryptographic value will be computed from a secret part, which we call the nym_secret and a part that is public or at least shared between the prover and one or more verifiers. The public part we call the context_id. 

A simplified overview of this flow is shown in Figure 3:

Figure 3: Simplified overview of BBS Pseudonym flow

Issuance

To bind a pseudonym to a BBS signature we have the signer utilize Blind BBS signatures and essentially sign over a commitment to the nym_secret. Hence only a prover that knows the nym_secret can generate a BBS proof from the signature (and also generate the pseudonym proof).

The prover chooses their (random) prover_nym and commits to it. They then send the commitment along with a ZKP that the prover_nym opens this commitment. The signer verifies the commitment to the prover_nym, then generates the signer_nym_entropy and “adds” it to the prover_nym during the signature process. Note that this can be done because we sign over the commitment and know the generator for the commitment.

Figure 4: Credential Issuance detailed flow

As in Lysya2000, we are concerned with the possibility of a dishonest user and hence require that nym_secret = prover_nym + signer_nym_entropy be the sum of two parts, where prover_nym is the prover’s secret and is only sent to the signer inside a blinded, hiding commitment. The signer_nym_entropy is “added” in by the signer during the signing procedure and sent back to the prover along with the signature.
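As a rough illustration of this additive split, the toy sketch below works in a small mod-p group with a Pedersen-style commitment; the real drafts use the BLS12-381 curve and an accompanying zero-knowledge proof, both omitted here, and every name and parameter in the sketch is an assumption for illustration only.

```python
import secrets

# Toy prime-order group (far too small for real use; the BBS drafts use BLS12-381):
p, q = 2039, 1019        # p = 2q + 1, both prime
g, h = 4, 9              # two generators of the order-q subgroup (squares mod p)

# Prover: pick prover_nym and hide it in a Pedersen-style commitment.
prover_nym = secrets.randbelow(q)
blinding = secrets.randbelow(q)
commitment = (pow(g, prover_nym, p) * pow(h, blinding, p)) % p
# The prover would also send a ZKP of knowledge of an opening (omitted here).

# Signer: contribute fresh entropy and sign over the commitment, which
# implicitly ties the signature to prover_nym + signer_nym_entropy.
signer_nym_entropy = secrets.randbelow(q)

# Prover, after issuance: only the prover can reconstruct the full secret.
nym_secret = (prover_nym + signer_nym_entropy) % q
```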

Verification

The pseudonym is calculated from nym_secret and context_id using discrete exponentiation.

nym_secret: A two-part secret combining:

prover_nym: Generated and known only by the prover

signer_nym_entropy: Contributed by the signer during signing

context_id: A public identifier that is shared between the prover and one or more verifiers

This is similar to the computations in Lysya2000 and ABC2014. The pseudonym is presented to the verifier along with a ZKP that the prover knows the nym_secret and uses it and the context_id to compute the pseudonym value. A similar proof mechanism was used in Lysya2000. See chapter 19 of BS2023 for an exposition on these types of ZKPs.
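A minimal sketch of that computation follows, again in a toy mod-p group rather than the BLS12-381 group the drafts actually use, and without the accompanying ZKP; the helper names and toy parameters are assumptions for illustration.

```python
import hashlib

# Toy prime-order group (illustration only; far too small for real security):
p, q = 2039, 1019        # p = 2q + 1, both prime

def hash_to_group(context_id: str) -> int:
    """Map a context string to a subgroup element (toy analogue of hash-to-curve)."""
    digest = int.from_bytes(hashlib.sha256(context_id.encode()).digest(), "big")
    return pow(digest % p, 2, p)   # squaring lands in the order-q subgroup

def pseudonym(nym_secret: int, context_id: str) -> int:
    """Pseudonym = H(context_id) ^ nym_secret, a discrete exponentiation in the group."""
    return pow(hash_to_group(context_id), nym_secret, p)

nym_secret = (123 + 456) % q   # toy prover_nym + signer_nym_entropy
print(pseudonym(nym_secret, "https://website.example"))    # same context -> same value
print(pseudonym(nym_secret, "https://website.example"))    # identical to the line above
print(pseudonym(nym_secret, "https://othersite.example"))  # different context -> different value
```

In the actual protocol the prover also supplies a ZKP that the presented pseudonym was computed from the same nym_secret that underlies the credential; the sketch omits that proof.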

Figure 5: BBS Pseudonym verification flow

See the appendix for a detailed diagram showing the complete BBS pseudonym issuance and verification flow.

BBS Pseudonym Example Applications

Certifiable Pseudonyms

Certifiable pseudonyms work as follows:  the prover creates unique pseudonyms based on context_ids they choose. While the nym_secret is guaranteed unique to and by the issuer, the issuer never learns its value. 

When using the pseudonym, the prover presents both the context_id and pseudonym to identify themselves, along with any attributes (messages) they choose to reveal. No one else without the nym_secret and signature can produce a proof that they “own” the pseudonym. The prover can create as many different, unlinkable pseudonyms by coming up with different values for the context_id. 

Figure 6: In this example of Certifiable Pseudonyms, the prover chooses a different context_id per service they interact with

Scope Exclusive Pseudonyms

With scope exclusive pseudonyms, the verifier or group of verifiers require the use of a specific context_id. This allows the verifier (or group of verifiers) to track visits by the prover using this credential/pseudonym. A verifier can limit data collection, i.e. data retention minimization, by periodically changing the context_id since the pseudonyms produced using different context_ids cannot be linked. For example a context_id like “mywebsite.com/17Nov2024” that changes daily means the verifier could only track visits daily.
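For example, a verifier that rotates its contextual identifier daily might derive it as below; the naming scheme and helper are hypothetical, and the pseudonym itself would then be computed from this string exactly as in the earlier sketch.

```python
from datetime import date

def daily_context_id(origin: str, day: date) -> str:
    # Pseudonyms computed under different days' context_ids cannot be linked.
    return f"{origin}/{day.strftime('%d%b%Y')}"

print(daily_context_id("mywebsite.com", date(2024, 11, 17)))  # mywebsite.com/17Nov2024
print(daily_context_id("mywebsite.com", date(2024, 11, 18)))  # mywebsite.com/18Nov2024
```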

Figure 7: Scope exclusive pseudonyms where verifiers, or groups of verifiers, require a specific context_id. In this example, scope is set at a per-day level

Scope Exclusive Pseudonyms with Monitoring

Scope exclusive pseudonyms with monitoring enable regulated privacy in scenarios requiring third-party oversight. Consider a system for purchasing controlled chemicals: A prover with appropriate credentials uses a different pseudonym with each vendor. This prevents vendors from colluding on prices or learning secret formulas by tracking a prover's complete purchase history across different vendors. At the same time, for regulatory compliance and public safety, vendors must report all purchases to a monitor, including the pseudonym used and their vendor-specific context_id. The prover shares their nym_secret only with the monitor, allowing the monitor to link different pseudonyms to the same prover when necessary. This separation between nym_secrets and other credential-binding secrets is crucial - it enables regulatory oversight without introducing additional cross-tracking by vendors.

Figure 8: In this example, scope exclusive pseudonyms with monitoring are used to enable regulatory compliance while not reusing pseudonyms across vendors

A Short Selection of References

[Chaum1985] D. Chaum, “Security without identification: transaction systems to make big brother obsolete,” Commun. ACM, vol. 28, no. 10, pp. 1030–1044, Oct. 1985, doi: 10.1145/4372.4373.

[Lysya2000] A. Lysyanskaya, R. L. Rivest, A. Sahai, and S. Wolf, “Pseudonym Systems,” in Selected Areas in Cryptography, H. Heys and C. Adams, Eds., Lecture Notes in Computer Science, vol. 1758, Berlin, Heidelberg: Springer, 2000, pp. 184–199, doi: 10.1007/3-540-46513-8_14.

[ABC2014] P. Bichsel et al., “D2.2 – Architecture for Attribute-based Credential Technologies – Final Version,” Aug. 2014. [Online]. Available: https://abc4trust.eu/download/Deliverable_D2.2.pdf. [Accessed: Feb. 10, 2025].

[BS2023] D. Boneh and V. Shoup, “A Graduate Course in Applied Cryptography,” Version 0.6. [Online]. Available: https://toc.cryptobook.us/book.pdf. [Accessed: Feb. 10, 2025].

[IETF-BBSblind-00] Internet Engineering Task Force, “BBS Blind Signatures – draft-irtf-cfrg-bbs-blind-signatures-00,” [Online]. Available: https://www.ietf.org/archive/id/draft-irtf-cfrg-bbs-blind-signatures-00.html. [Accessed: Feb. 10, 2025].

[IETF-BBSlink-00] Internet Engineering Task Force, “BBS Per-Verifier Linkability – draft-irtf-cfrg-bbs-per-verifier-linkability-00,” [Online]. Available: https://www.ietf.org/archive/id/draft-irtf-cfrg-bbs-per-verifier-linkability-00.html. [Accessed: Feb. 10, 2025].

Appendix: Complete BBS Pseudonym Flow (Issuance & Verification)

Next Steps

Dive deeper into the BBS Blind and Per Verifier Linkability specifications: 

Review bbs-blind-signatures and bbs-per-verifier-linkability

Consider implementing the specifications to further evaluate the specs

Provide feedback in the IETF Crypto Forum Research Group

Join DIF’s Applied Crypto Working Group for ongoing meetings

Contribute to W3C Data Integrity Specification Development:

To evaluate how BBS can be used in the context of W3C Verifiable Credentials, join the W3C VC Data Integrity Community Group

Stay up to date: 

Subscribe to the DIF blog to receive updates


We Are Open co-op

Starting principles for the ethical use of AI

Part 3 of our series on AI and Activism looks at the seven principles we developed in collaboration with Friends of the Earth for climate justice and digital rights campaigners. Read or download the full report here. It further unpacks each principle and includes responsible practices and policy recommendations. This work was funded by Mozilla. Understanding Predominant Narratives in 

Part 3 of our series on AI and Activism looks at the seven principles we developed in collaboration with Friends of the Earth for climate justice and digital rights campaigners.

Read or download the full report here. It further unpacks each principle and includes responsible practices and policy recommendations. This work was funded by Mozilla.

Understanding Predominant Narratives in AI
Systemic complexities with AI
Starting principles for the ethical use of AI
How to be an Activist in a World of AI

Our hope is that you will use these principles to help navigate the dilemmas of AI use and communicate your approach and choices effectively — both with your networks and the wider public.

7 Principles

1. Exploring AI with curiosity creates opportunities for better choices.

The first principle encourages us to understand that there are types of AI that can help us find new and innovative solutions to problems we already face. Viewing this conversation from a perspective of learning and curiosity, and with a sense of playfulness, can help activists find happiness and hope in an ever complex technological and social landscape.

AI is an increasingly divisive, strange and intriguing set of technologies. Encourage critical thinking about AI use in your communities, discussing and questioning its use and, where appropriate, exploring simpler, more sustainable alternatives.

2. Transparency around usage, data and algorithms builds trust

Transparency is essential to address the environmental impacts of AI. It is also a requirement for public trust and to ensure accountability. Transparency helps us make informed-decisions. Therefore, greater transparency in how AI is developed, trained and used as well as in how we, as environmentalists and campaigners are using AI, can have a profound impact on how technology influences our climate and our societies.

Encourage openness in AI development and usage, providing clear information about training data, environmental impacts, and possible unintended side effects.

3. Holding tech companies and governments accountable leads to responsible action.

Governments need to apply safety and transparency regulations, with strict consequences for non-compliance. Environmental justice campaigners could promote holistic assessment and accountability frameworks to help examine wider societal and environmental impacts. (Kazansky, 2022). These would extend beyond carbon emissions to include resource extraction and social ramifications.

Holding one another to account and taking responsibility for our actions are essential to societal progress. We must hold both organisations and companies responsible for the environmental effects of AI, challenging misleading claims and ensuring responsible industry practices.

Wire Bound by Hanna Barakat + AIxDESIGN & Archival Images of AI

4. Including diverse voices strengthens decision-making around AI.

LLM training data is usually based on the somewhat-indiscriminate ingestion of mass amounts of data using datasets that represent the good, the bad, and the ugly of human discourse and bias. As LLMs tend to be a black box, we often cannot see the source of where bias is coming from, and a lack of diverse perspectives in AI development means that the needs of marginalised communities are not met, further entrenching existing disparities.

Being inclusive is an intentional act incorporating a range of perspectives to help ensure that marginalised voices are heard. To help build fair and equitable societies, we must work to prevent further bias and existing inequalities from being entrenched in AI systems.

5. Ensuring AI systems are sustainable reduces environmental impact and protects natural ecosystems.

When we talk about ‘sustainability’ in relation to AI, the most obvious targets are energy consumption, water use and emissions from data centres. However, we also need to talk about wider sustainability issues, including natural ecosystems, human well-being and the sustainability of outsourcing decision-making to machines.

Overreliance on technology in general can lead to the deterioration of skills, and AI is no different. Being sustainable goes beyond reducing energy usage. It includes careful consideration about the use of valuable resources, including human labour. Promote energy-efficient AI models and adopt an approach that reduces environmental and societal harms at every stage of AI’s lifecycle.

6. Community is key to planetary resilience

It is important to establish channels for genuine community participation in AI-related discussions and decision-making. This involves ensuring access to relevant information and the opportunity for communities to provide free, prior, and informed consent. Developing shared principles and guidelines that prioritise transparency, equity, and inclusivity will empower communities to actively contribute to shaping AI solutions that align with their values and priorities.

Technology is a tool communities can use to bring about a better world, but it is solidarity that gets us there. Encourage AI development to take place according to the needs of minority and majority communities, our planet, and not just market forces.

7. Advocating with an intersectional approach supports humane AI.

There is significant overlap between the communities that drive forward progressive environmental justice policies and those who advocate for digital rights. (Kazansky, 2022). In recent years many tech activists have used their technical skills on behalf of justice-based organisations. These organisations have sought out tech activists to help them mitigate and understand digital rights issues like data security, misinformation, or digital attacks.

While we see the centralisation of power in the tech industry and technology stacks owned by just a few mega-corporations, we also see intersectionality becoming a critical component in climate justice and digital rights campaigning.

Read or download the full report here.

Technology has changed so much about our world. Now, AI is changing things again and at hyperspeed. We work to understand its potential implications for our communities, learners and programmes. Do you need help with AI Literacies, strategy, or storytelling? Get in touch!

References

Kazansky, B., et al. (2022). ‘At the Confluence of Digital Rights and Climate & Environmental justice: A landscape review’. Engine Room. Available at: https://engn.it/climatejusticedigitalrights (Accessed: 24 October 2024)

Starting principles for the ethical use of AI was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 17. February 2025

EdgeSecure

The Edge Ecosystem of IT Virtualization Solutions and Services

The post The Edge Ecosystem of IT Virtualization Solutions and Services appeared first on NJEdge Inc.
Preface

Edge understands that many higher education institutions are grappling with significant financial challenges driven by a convergence of factors. Declining enrollment, fueled by demographic shifts and a shrinking pool of high school graduates, has reduced tuition revenue, particularly for institutions in regions most affected by the “enrollment cliff.” This has been exacerbated by increased competition among schools vying for fewer students, often resulting in higher financial aid expenditures and reduced tuition rates. Meanwhile, decreased public funding has forced many public colleges and academic institutions to rely more heavily on tuition income, straining budgets further. At the same time, rising operational costs—such as salaries, benefits, technology upgrades, and campus maintenance—have compounded these financial pressures. The lingering effects of the COVID-19 pandemic, including reduced revenues from housing and auxiliary services and increased expenditures for online learning and health measures, have only deepened these fiscal challenges.

Compounding these issues, disparities in institutional resources have widened. Wealthy academic institutions with large endowments have been better equipped to navigate financial turbulence, while smaller, tuition-dependent schools often face existential threats. Many students and families are questioning the value of higher education amid rising tuition and concerns about student debt, leading institutions to reevaluate program offerings to align with workforce demands. However, these adjustments often require substantial investments that can be difficult for struggling institutions to afford. The financial instability has led to an increase in college closures, mergers, and consolidations, particularly among smaller private institutions. In response, schools are exploring new revenue streams, shared service models, and operational efficiencies to remain viable, but the long-term sustainability of these measures varies significantly by institution.

Virtualized and outsourced information technology solutions can significantly reduce operational costs for higher education institutions by minimizing the need for on-premises infrastructure and reducing associated maintenance expenses. Virtualization enables institutions to consolidate servers, storage, and other IT resources, improving resource utilization and reducing the physical space and energy consumption required to support IT operations. Cloud-based platforms and services allow colleges and academic institutions to scale IT capabilities based on demand, avoiding the high upfront capital costs associated with purchasing hardware and software. These solutions also offer institutions the flexibility to pay only for what they use, enabling more predictable budgeting. Additionally, by shifting to cloud-hosted applications for administrative tasks, learning management systems, and data analytics, institutions can streamline operations and redirect resources to mission-critical activities such as teaching and research.

Outsourcing IT services, such as cybersecurity, help desk support, and software development, can further enhance cost efficiency. Partnering with specialized providers enables institutions to leverage expert knowledge and advanced tools without the expense of maintaining a large in-house IT team. Outsourced solutions often come with built-in updates, ensuring that institutions remain current with technology trends and compliance requirements without incurring additional costs. Furthermore, outsourcing frees up institutional resources to focus on strategic initiatives, such as improving student outcomes or enhancing the academic experience. By embracing virtualized and outsourced IT solutions, colleges and academic institutions can not only reduce operational costs but also increase agility, enabling them to adapt more effectively to a rapidly changing higher education landscape.

As higher education institutions grapple with escalating financial pressures, they are increasingly exploring innovative solutions to maintain operational efficiency and deliver quality education. The concept of “Virtual IT” has emerged as a compelling strategy, providing an alternative to the traditional in-house IT model by outsourcing specific technology functions to external providers or remote workforces. By adopting Virtual IT, colleges and academic institutions can harness the power of modern technology without incurring the substantial costs of managing comprehensive IT departments on campus.

The financial challenges facing higher education are significant, fueled by factors like declining enrollment, reductions in state funding, and growing operational expenses. In this context, maintaining a fully staffed and up-to-date IT department can strain institutional budgets. From personnel salaries and training to software licensing and infrastructure maintenance, the financial demands of an in-house IT team can be prohibitive, especially for smaller institutions. Virtual IT offers a pathway to financial sustainability, allowing educational institutions to reallocate resources to their core mission of teaching and learning.

This white paper explores the emergence and advantages of Virtual IT in higher education. It examines how outsourcing technology functions can lead to substantial cost savings, improve access to specialized expertise, and provide enhanced flexibility in adapting to evolving technological needs. By shifting select IT responsibilities to external vendors, institutions can achieve a balance between financial efficiency and technological advancement, positioning themselves to thrive in a dynamic educational landscape.

Higher education institutions are experiencing unprecedented financial pressures that necessitate innovative solutions to reduce operating costs, particularly in IT. Several key factors contribute to these economic challenges, including:

1. Declining Enrollment Rates
Demographic shifts, especially in the United States and parts of Europe, have led to a shrinking pool of college-age students, which has driven down enrollment numbers in many regions. Additionally, some potential students are questioning the return on investment of higher education, leading to fewer applicants and increased competition among institutions to attract students. Lower enrollment directly impacts tuition revenue, a primary funding source for many colleges and academic institutions, creating budgetary constraints across the institution, including IT.

2. Reductions in State and Federal Funding
Public institutions, in particular, rely heavily on state and federal funding, which has been on a downward trend in recent years. Economic recessions, political shifts, and competing priorities have resulted in decreased state appropriations for higher education. With fewer public funds, institutions face difficult choices, often needing to reallocate or reduce budgets for essential services, including IT. This funding shortfall pushes institutions to seek more cost-effective ways to maintain and upgrade their technology infrastructure.

3. Rising Operational Costs
Across the board, operational costs for higher education institutions have increased. The cost of utilities, campus maintenance, healthcare benefits, and other operational expenses continues to rise. Information technology, with its ongoing need for software licensing, hardware upgrades, cybersecurity measures, and personnel training, represents a significant portion of these expenses. As other areas also require increased funding, IT departments are under pressure to cut costs wherever possible.

4. Technological Advancements and Rapid Change
The pace of technological change in IT is faster than ever, with innovations emerging in areas such as cloud computing, artificial intelligence, data analytics, and cybersecurity. While these technologies present exciting opportunities, they also require continuous investments to stay current. For institutions already facing financial strain, the costs of frequent hardware upgrades, new software tools, and specialized personnel to implement and maintain these technologies can be unsustainable.

5. Increasing Need for Cybersecurity
Higher education institutions are prime targets for cyberattacks due to the sensitive data they hold and their often-vast and complex networks. Cybersecurity is a significant financial burden, as it requires not only up-to-date technology but also skilled personnel to monitor and protect the institution’s digital assets. Implementing comprehensive cybersecurity measures is costly, and many institutions struggle to keep up with the resources needed to secure their IT infrastructure adequately.

6. COVID-19 Post-pandemic Impact
The COVID-19 pandemic had a lasting impact on higher education, with institutions forced to shift to online and hybrid learning models nearly overnight. This shift required substantial investments in digital infrastructure, training, and support to accommodate remote teaching and learning. While many institutions received temporary federal support to cover pandemic-related costs, these funds were finite, and the increased reliance on technology continues to demand investment long after the initial transition. The pandemic also led to revenue losses in other areas, such as housing, dining, and auxiliary services, compounding financial pressures.

7. Pressure to Maintain Competitive Edge
As more institutions vie for a smaller pool of students, maintaining a competitive edge is crucial. Technology-enhanced learning environments, smart classrooms, and digital campus experiences are increasingly seen as differentiators. However, creating and sustaining these environments can be costly, and with limited budgets, institutions often struggle to balance the need for innovation with fiscal responsibility. Virtual IT offers a potential solution by providing access to modern technology without requiring the heavy capital investments associated with maintaining an in-house IT department.

8. Increasing Student Expectations
Today’s students expect their institutions to provide a seamless digital experience that includes fast Wi-Fi, virtual collaboration tools, online services, and secure access to academic resources from anywhere. Meeting these expectations requires continuous investment in IT infrastructure and services. However, with financial pressures mounting, institutions are challenged to deliver this level of service with limited resources, making Virtual IT an appealing solution for improving service while managing costs.

Given these financial pressures, higher education institutions are exploring ways to lower their IT operating costs without sacrificing quality. Outsourcing specific IT functions, leveraging cloud-based solutions, and employing remote technical support teams allow institutions to reduce overhead costs associated with staffing, equipment, and software maintenance. Virtual IT provides access to specialized skills, economies of scale, and flexible service models that can be tailored to an institution’s specific needs, helping them stay technologically competitive while managing costs. By adopting Virtual IT, institutions can reallocate funds toward their core educational mission, ensuring they continue to deliver value to students despite the economic challenges they face.

What are Edge's 10 Predominant Virtualized IT Solutions and Services?

One of the most common forms of virtual IT is moving to cloud-based solutions for data storage, hosting, and software applications. Instead of maintaining expensive on-premise hardware and software, institutions can rely on cloud providers like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud to manage infrastructure, providing scalable solutions that align with the institution’s needs.

HOW IT WORKS

Cloud-Based Infrastructure: Instead of maintaining on-premise data centers, academic institutions and colleges can transition their IT infrastructure to the cloud, relying on providers like AWS, Microsoft Azure, or Google Cloud. This action involves migrating data storage, hosting, and software applications to these secure, scalable platforms. Cloud providers handle the maintenance, updates, and infrastructure management, freeing up the institution’s internal IT resources for other tasks.

Scalable Solutions: Cloud services allow higher education institutions to scale their computing resources up or down based on current needs. This flexibility is essential during peak usage times, like the beginning of semesters or when large-scale research projects demand increased computing power. Cloud platforms offer elastic capabilities, meaning academic institutions only pay for what they use, optimizing both performance and costs.

Managed Services: Many cloud providers offer managed services for software applications, security, and infrastructure, which can further offload operational responsibilities from internal IT teams. This enables institutions to access advanced features like automated backups, data encryption, and load balancing without having to manage them in-house.

BENEFITS

Cost Efficiency: Moving to the cloud reduces the need for on-site hardware, reducing capital expenditures and ongoing maintenance costs. Academic institutions avoid the upfront costs of purchasing servers, storage devices, and other infrastructure, instead paying for cloud services on a subscription or usage-based model. This model supports budgeting flexibility, as costs align more closely with actual demand.

Scalability and Flexibility: Cloud platforms offer unprecedented scalability, enabling institutions to easily scale resources as their needs grow. Whether it’s adding more storage capacity or scaling up computing power for specific research needs, academic institutions can adjust their cloud infrastructure to meet demand without worrying about overcapacity or underutilization.

Enhanced Security and Reliability: Leading cloud providers invest heavily in security and offer a range of built-in features such as multi-factor authentication, encryption, and compliance with industry standards like GDPR or HIPAA. Cloud services also often provide robust disaster recovery solutions, ensuring that institutional data is protected and recoverable in case of hardware failure or cyberattacks.

Focus on Core Objectives: By offloading infrastructure management to cloud providers, academic institutions can focus more on their core objectives, such as delivering quality education, conducting research, and supporting faculty and students. IT staff are freed from routine maintenance tasks and can focus on more strategic initiatives.

EXAMPLE

A mid-sized public academic institution might have traditionally relied on an on-premise data center to host its website, student portal, and faculty collaboration tools. However, the academic institution experiences frequent spikes in demand during registration periods, and maintaining the infrastructure is costly and labor-intensive. By moving to a cloud-based platform like AWS, the academic institution can offload its infrastructure management to a cloud provider, ensuring the website can handle high traffic volumes without downtime. The academic institution can also scale up storage and computing capacity as needed during busy periods, and scale down during quieter times, minimizing costs. Additionally, by using cloud-hosted tools like Office 365 and Google Workspace, faculty and students can access their work from anywhere, enhancing collaboration and reducing reliance on internal hardware.
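
To make the storage piece of this example concrete, here is a minimal Python sketch, using the AWS SDK (boto3), of uploading course materials to cloud object storage and listing what is stored there. The bucket name, file paths, and prefix are hypothetical placeholders rather than anything referenced above, and a real deployment would also need AWS credentials and permissions configured.

```python
# Minimal illustration of cloud object storage with the AWS SDK (boto3).
# The bucket name, file paths, and prefix are hypothetical placeholders;
# AWS credentials and permissions must already be configured.
import boto3

def upload_course_materials(bucket: str, local_path: str, key: str) -> None:
    """Upload a local file to an S3 bucket (cloud object storage)."""
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)

def list_course_materials(bucket: str, prefix: str = "") -> list[str]:
    """Return the object keys stored under a prefix in the bucket."""
    s3 = boto3.client("s3")
    response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in response.get("Contents", [])]

if __name__ == "__main__":
    upload_course_materials("example-campus-materials", "syllabus.pdf", "fall/syllabus.pdf")
    print(list_course_materials("example-campus-materials", prefix="fall/"))
```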

2. Managed IT Services

Rather than staffing an entire IT department, academic institutions can outsource specific functions to third-party vendors. For example, IT management services such as network monitoring, helpdesk support, cybersecurity, and system administration can be provided by companies that specialize in these areas. This can free up the academic institution’s internal staff to focus on more strategic initiatives, such as enhancing teaching and research technologies.

HOW IT WORKS

Outsourcing Specific IT Functions: With Managed IT Services, academic institutions can delegate key IT functions to third-party vendors who specialize in areas such as network monitoring, helpdesk support, cybersecurity, system administration, and software management. Rather than hiring and managing an entire internal IT team, academic institutions can work with a service provider to handle the technical aspects of their infrastructure.

24/7 Support and Monitoring: Managed IT services typically include around-the-clock monitoring of systems, ensuring that networks, servers, and critical applications are functioning optimally. Providers often use advanced tools to detect issues early, preventing downtime before it affects users. Helpdesk services are also available to assist students, faculty, and staff with technical problems in real-time, reducing internal support burdens.

Cybersecurity and Compliance: Many Managed IT Service providers offer specialized cybersecurity services, including threat detection, data encryption, firewall management, and compliance with industry standards. Academic institutions, which often deal with sensitive student data and research materials, benefit from the expertise of security professionals who can help mitigate risks and ensure regulatory compliance.

System Administration and Maintenance: Managed services can also include regular system administration tasks, such as software updates, patch management, backups, and disaster recovery planning. This allows academic institutions to ensure their IT infrastructure remains up-to-date and secure without dedicating internal resources to these time-consuming tasks.

BENEFITS

Cost Savings: By outsourcing specific IT services, academic institutions can avoid the high costs associated with hiring, training, and retaining a full in-house IT team. Managed IT providers typically offer flexible pricing models based on the level of service needed, enabling institutions to choose packages that fit their budget. These services also reduce the cost of hardware and software maintenance by ensuring efficient and proactive system management.

Access to Specialized Expertise: Managed IT services bring in expert professionals who specialize in areas like cybersecurity, cloud computing, and network infrastructure. Academic institutions benefit from the high-level expertise that might otherwise be costly or difficult to hire for a full-time, internal team. This expertise is particularly valuable in fields like cybersecurity, where threats are constantly evolving.

Increased Focus on Core Priorities: With routine IT management outsourced to experts, academic institution staff can focus more on strategic, value-driven initiatives such as enhancing classroom technologies, supporting research innovation, or improving student engagement platforms. This shift in focus helps maximize the impact of IT resources on the institution’s core mission.

Scalability and Flexibility: Managed IT services offer scalability, allowing academic institutions to adjust the level of support as needs change. For instance, during periods of peak demand (e.g., the start of a semester or when a new research project kicks off), additional services can be quickly brought in. Conversely, during quieter periods, services can be scaled back, ensuring that the institution is only paying for what it needs.

Enhanced Security and Compliance: By leveraging the expertise of managed service providers, academic institutions can bolster their cybersecurity posture and ensure compliance with regulations like GDPR or FERPA. Managed services often include proactive risk management and monitoring, which helps institutions avoid costly breaches or non-compliance penalties.

EXAMPLE

A large research academic institution might have a small internal IT team focused on supporting faculty and staff with technology needs for teaching and research. However, the academic institution struggles to keep up with the demands of maintaining network security, monitoring systems for vulnerabilities, and providing 24/7 helpdesk support for students. By outsourcing network monitoring and cybersecurity to a managed services provider, the academic institution can ensure that its IT infrastructure is continuously monitored for potential threats. The provider also handles system updates and ensures compliance with data privacy laws, freeing up the internal team to focus on supporting the institution’s academic and research missions. Additionally, when students experience technical issues with the academic institution’s online portal, the outsourced helpdesk team is available around the clock to provide immediate support, improving user experience while alleviating the burden on internal IT staff.
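
As a rough illustration of the continuous monitoring a managed services provider automates, the sketch below polls a few service endpoints and prints an alert when one stops responding. The URLs and polling interval are hypothetical, and a production provider would use purpose-built monitoring tools with alerting and escalation rather than a loop like this.

```python
# Illustrative uptime check of the kind a managed services provider runs continuously.
# The endpoint URLs and the polling interval are hypothetical placeholders.
import time
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://portal.example.edu/health",
    "https://lms.example.edu/health",
]

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError):
        return False

if __name__ == "__main__":
    while True:
        for url in ENDPOINTS:
            if not is_healthy(url):
                # A real provider would page on-call staff or open a ticket here.
                print(f"ALERT: {url} is not responding")
        time.sleep(60)  # check once per minute
```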

3. Remote Workforce

Virtual IT teams can be spread across the globe, allowing institutions to tap into talent pools in regions with lower labor costs. For example, academic institutions could hire remote IT staff from countries with lower wage levels for roles such as system administrators, software developers, or technical support agents. This strategy can help reduce personnel costs without sacrificing expertise or service levels.

HOW IT WORKS

Global Talent Pool: Virtual IT teams leverage the ability to hire professionals from anywhere in the world, enabling academic institutions to access a broader and more diverse talent pool. By employing remote IT staff in regions with lower labor costs, institutions can reduce their overall personnel expenses without compromising on the quality of service or technical expertise.

Distributed Workforce: Remote workforces operate from multiple geographic locations, often across different time zones. This structure allows academic institutions to benefit from extended support hours and faster response times, as tasks can be handled around the clock by team members in various regions. For example, technical support can be provided during off-hours by remote agents, ensuring that issues are addressed without delay.

Flexible Roles and Responsibilities: Academic institutions can hire remote staff for specific IT functions such as system administration, software development, technical support, or database management. These roles can be filled by individuals with the relevant skills, whether it’s a network engineer from Eastern Europe, a software developer from Southeast Asia, or a cybersecurity expert from Latin America. Remote work platforms and collaboration tools facilitate communication and project management, ensuring seamless teamwork despite geographical distances.

Cost Optimization: By hiring remote staff in regions with lower wage levels, academic institutions can achieve significant savings compared to hiring local professionals with similar qualifications. This cost advantage allows institutions to stretch their budgets further, enabling them to invest in other critical areas such as student services, academic resources, or research initiatives.

BENEFITS

Cost Savings: One of the primary advantages of employing a remote workforce is the reduction in labor costs. By hiring IT professionals in regions with lower wage standards, academic institutions can save on salaries, benefits, and other employment-related expenses. These savings can be redirected to other priorities, such as enhancing campus infrastructure or developing new academic programs.

Access to Specialized Expertise: Remote workforces open up access to a global talent pool, which means academic institutions can find the specialized skills they need at competitive rates. For example, an academic institution could hire a cybersecurity expert from India, a cloud architect from Brazil, or a software developer from the Philippines—all of whom bring unique expertise and experience to the institution.

Increased Flexibility and Scalability: Hiring remote IT staff allows academic institutions to quickly scale their teams based on evolving needs. For example, if an academic institution is launching a new research initiative or undergoing a major IT overhaul, it can easily hire additional remote staff with the required skills for a limited period, avoiding the overhead costs of full-time hires. This flexibility is especially valuable during times of growth or peak demand.

24/7 Support and Coverage: With a distributed remote workforce, academic institutions can achieve continuous support coverage. By leveraging time zone differences, academic institutions can ensure that technical support, system monitoring, and issue resolution happen at all hours, providing a higher level of service to students, faculty, and staff across different locations.

Diverse Perspectives and Innovation: Remote teams often bring diverse perspectives and ideas, which can enhance problem-solving and innovation. Academic institutions benefit from the cultural diversity and unique approaches that remote workers bring to the table, especially when it comes to developing creative solutions for technology challenges.

EXAMPLE

A medium-sized academic institution in the U.S. needs additional IT support to handle an expanding online course offering and to maintain a growing research infrastructure. While the academic institution’s internal IT team is focused on day-to-day campus technology needs, they require extra help for system administration, software development, and technical support. To keep costs under control, the academic institution hires a system administrator from Mexico, a software developer from the Philippines, and a technical support agent from Ukraine.

By having remote staff in different regions, the academic institution can provide around-the-clock support for students and faculty, address technical issues faster, and implement new systems with greater efficiency. The cost savings from hiring these remote professionals—who are skilled and experienced but at a fraction of the cost of local hires—allow the academic institution to reinvest those savings into its academic programs and student services, further enhancing the educational experience for its community.

4. Outsourced R&D Infrastructure

Many institutions are involved in cutting-edge research that relies heavily on IT infrastructure. Instead of investing heavily in building and maintaining research-specific IT environments, schools can partner with external research organizations or cloud-based platforms that offer specialized computing resources, such as high-performance computing (HPC) or machine learning infrastructure.

HOW IT WORKS

Collaborating with External Research Partners: Academic institutions can outsource their research and development (R&D) needs to specialized external organizations or cloud service providers that offer cutting-edge IT infrastructure. Rather than investing in expensive on-site hardware, academic institutions can tap into cloud-based platforms or partner with external research institutions that provide high-performance computing (HPC), machine learning (ML) infrastructure, and other specialized resources.

Cloud-Based Research Infrastructure: Instead of building and maintaining their own high-performance computing clusters, academic institutions can rent or subscribe to cloud services like AWS, Microsoft Azure, or Google Cloud, which offer powerful computing resources optimized for research. These platforms provide the scalability, speed, and flexibility needed for advanced research tasks, such as data modeling, simulations, and machine learning experiments.

Access to Specialized Tools and Platforms: Outsourcing R&D to specialized providers often grants academic institutions access to advanced research tools and platforms that might otherwise be cost-prohibitive. For example, cloud providers offer machine learning frameworks, data analytics tools, and scientific computing platforms that are optimized for large-scale research projects. Academic institutions can also collaborate with organizations that offer niche expertise in specific fields, such as bioinformatics, computational physics, or artificial intelligence.

Flexible and Scalable Resources: Outsourcing R&D infrastructure means that academic institutions can scale computing resources up or down based on the demands of specific research projects. Whether conducting intensive data analysis or running large-scale simulations, academic institutions can instantly access the computing power they need without having to invest in permanent infrastructure. This flexibility helps reduce costs while ensuring that research teams always have the resources required to meet project demands.

BENEFITS

Cost Efficiency: By outsourcing R&D infrastructure, academic institutions can avoid the high capital costs associated with building and maintaining specialized research environments. Cloud services and external research partnerships often operate on a pay-as-you-go model, allowing academic institutions to pay only for the computing resources and services they actually use. This significantly reduces upfront investments and ongoing maintenance costs associated with owning high-performance computing systems.

Access to Cutting-Edge Technology: Outsourcing R&D provides academic institutions with access to the latest advancements in research technology and computing infrastructure without the need to manage these systems internally. Cloud platforms continuously update their infrastructure to provide the most advanced tools, ensuring that academic institutions can leverage the latest technologies, such as quantum computing, machine learning frameworks, and artificial intelligence algorithms, without heavy internal investment.

Scalability and Flexibility: Research projects often experience fluctuating computing demands, with periods of intense data analysis or large-scale simulations. Outsourcing R&D infrastructure allows academic institutions to scale their computing resources according to the size and scope of each project. They can ramp up resources during peak usage and scale down when projects are completed, ensuring they only pay for the resources they need.

Focus on Core Research: By outsourcing the technical infrastructure management, academic institutions can focus more on their core research activities, such as developing new theories, conducting experiments, and analyzing data. The burden of maintaining specialized IT environments, managing server hardware, and optimizing computational resources is shifted to external providers, allowing faculty and researchers to concentrate on advancing knowledge in their fields.

Collaboration and Expertise: Outsourcing R&D infrastructure often involves collaboration with external research organizations or cloud service providers, which can bring additional expertise and resources to a project. This collaboration can lead to new insights, cross-disciplinary partnerships, and access to specialized knowledge that would otherwise be unavailable to the academic institution.

EXAMPLE

A large academic institution with a prominent biomedical research department is working on a project to analyze large genetic datasets using machine learning models. The research requires significant computational power for tasks such as running simulations and processing terabytes of data. Rather than investing in the development of a custom high-performance computing (HPC) cluster, the academic institution partners with a cloud service provider like AWS, which offers specialized tools for bioinformatics, machine learning, and data storage.

Through this partnership, the academic institution gains access to powerful, scalable computing resources without having to manage the infrastructure internally. Researchers can run complex algorithms and simulations using the cloud platform’s machine learning models, speeding up their analysis and reducing costs. The academic institution only pays for the computing resources it uses, allowing it to allocate its budget more efficiently. Moreover, the research team benefits from the cloud provider’s constant updates and access to cutting-edge technologies, helping them remain at the forefront of biomedical research without needing to maintain the hardware themselves.

Additionally, by partnering with Edge’s cloud-based research platforms, the academic institution can easily collaborate with other institutions and researchers worldwide, sharing data and insights in real-time, further enhancing the scope and impact of their research.
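
The pay-as-you-go reasoning behind this kind of partnership can be sketched with some back-of-the-envelope arithmetic. The Python snippet below compares renting cloud compute for a bounded project against buying and running a dedicated cluster; every price, node count, and duration in it is an invented placeholder, not a quote from any provider.

```python
# Back-of-the-envelope comparison of renting cloud compute versus owning a cluster.
# All prices, node counts, and durations are hypothetical placeholders.

def cloud_cost(node_hours: float, price_per_node_hour: float) -> float:
    """Pay-as-you-go model: cost scales only with the hours actually used."""
    return node_hours * price_per_node_hour

def owned_cluster_cost(purchase_price: float, annual_upkeep: float, years: float) -> float:
    """Owned cluster: upfront purchase plus ongoing power, cooling, and staffing."""
    return purchase_price + annual_upkeep * years

if __name__ == "__main__":
    # A six-month project that needs 20 nodes for roughly 2,000 hours each.
    rented = cloud_cost(node_hours=20 * 2_000, price_per_node_hour=3.00)
    owned = owned_cluster_cost(purchase_price=400_000, annual_upkeep=80_000, years=0.5)
    print(f"Cloud, pay-as-you-go: ${rented:,.0f}")
    print(f"Dedicated cluster:    ${owned:,.0f}")
```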

5. Shared-Cost Models

Higher education institutions can participate in shared-cost models, where multiple institutions pool their resources to collectively pay for IT infrastructure, services, or support. This allows each participating institution to benefit from enterprise-level services and technology at a fraction of the cost they would incur individually. For instance, academic institutions could share the cost of high-end data storage, cybersecurity infrastructure, or cloud services, spreading the expenses across a network of institutions. Shared-cost models can also extend to staffing and specialized technical roles, where smaller schools gain access to high-level expertise by collaborating with larger institutions or third-party providers.

HOW IT WORKS

Pooling Resources Across Institutions: In a shared-cost model, multiple academic institutions and colleges collaborate to share the costs of IT infrastructure, services, and specialized support. Institutions that might otherwise struggle to afford enterprise-level technologies can combine their budgets to access advanced systems and services, such as high-end data storage, cybersecurity infrastructure, or cloud computing resources. This approach allows them to benefit from economies of scale while maintaining control over their individual operations.

Shared IT Services and Infrastructure: Rather than each institution purchasing and maintaining its own IT systems, participating schools collectively invest in and share access to high-quality infrastructure. For example, a group of academic institutions may collectively fund a centralized cloud service for data storage or a shared cybersecurity platform to protect all members’ networks. They could also pool resources to access expensive software or subscription-based services, ensuring that all schools, regardless of size or budget, can benefit from the same tools.

Collaborative Staffing and Expertise: Shared-cost models can also extend to staffing, allowing smaller institutions to benefit from specialized technical expertise they might not otherwise be able to afford. For instance, a consortium of smaller colleges may partner with a larger academic institution to share the costs of hiring high-level professionals, such as data scientists, cybersecurity experts, or system architects. These experts would be responsible for managing shared services or providing consulting to all participating institutions, giving smaller schools access to top-tier talent at a fraction of the cost.

Flexible and Scalable Solutions: Shared-cost models allow institutions to scale their shared services based on demand. If a particular service, such as cybersecurity, requires additional resources due to an increased number of cyber threats, the cost can be adjusted across the participating institutions. Likewise, if an institution needs additional storage or computing resources for a specific research project, the shared infrastructure can scale to meet those demands, with costs divided proportionally among the group.

BENEFITS

Cost Efficiency: Shared-cost models significantly reduce the financial burden on individual institutions. By pooling resources, academic institutions and colleges can access enterprise-level technology and services at a fraction of the cost they would incur if they had to purchase and maintain these resources on their own. This cost-sharing approach ensures that even smaller or budget-constrained schools can access high-end solutions that would typically be beyond their reach.

Access to High-Quality Infrastructure: Smaller institutions can benefit from the same high-end infrastructure, security, and IT services as larger academic institutions. This includes advanced cloud computing, cybersecurity protections, and data storage solutions that would typically require significant investments in hardware, software, and staffing. Shared-cost models democratize access to technology and resources, leveling the playing field between institutions of different sizes.

Access to Specialized Expertise: Through shared-cost models, smaller academic institutions and colleges can access highly specialized technical expertise that they may not have the budget or need to support in-house. By pooling resources with other institutions, they can afford to employ full-time professionals in areas such as cybersecurity, cloud architecture, or data science—expertise that is critical for keeping up with evolving technologies and protecting sensitive data.

Scalability and Flexibility: The shared-cost model is inherently flexible. As the needs of participating institutions grow or change, they can adjust the scale of the shared infrastructure or services to accommodate new demands. For example, a consortium of academic institutions that grows its online course offerings may decide to increase its shared cloud storage capacity or security coverage, spreading the costs of scaling across multiple schools.

Fostering Collaboration: Shared-cost models create a collaborative environment where institutions can work together to solve common IT challenges. Through these partnerships, academic institutions can share best practices, learn from each other’s experiences, and potentially collaborate on joint research initiatives. This networked approach can lead to innovation and new opportunities for all members.

EXAMPLE

A group of small liberal arts colleges in a regional consortium decides to pool their resources to share the costs of a high-end cybersecurity infrastructure. Individually, each institution could not afford the level of protection needed to safeguard its student data and research. However, by collaborating and dividing the costs, they are able to invest in an advanced firewall system, intrusion detection tools, and threat intelligence services managed by a third-party cybersecurity firm.

In addition to the shared cybersecurity services, the consortium also works together to hire a cloud architect to design and manage a centralized data storage solution that all participating schools can use. This allows each college to store and access large datasets without investing in expensive on-premise servers or cloud solutions on their own. Furthermore, one of the larger academic institutions in the consortium offers its faculty and staff access to its dedicated data scientists to help with advanced research projects. Smaller colleges, which would otherwise not be able to afford such expertise, can now access the knowledge and resources they need to advance their research capabilities.

By sharing the costs of these services and staffing, the colleges reduce their individual expenses while improving their IT infrastructure and security, ultimately offering a better educational experience for their students and faculty.
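
A simple way to picture the cost-sharing mechanics is a proportional split of a shared contract. The sketch below divides a hypothetical annual contract among member institutions in proportion to enrollment; the institutions, enrollments, and contract total are invented for illustration, and a real consortium might weight the split differently.

```python
# Proportional cost-sharing for a jointly purchased service, such as a shared
# cybersecurity platform. Names, enrollments, and the contract total are invented.

def split_cost(total_cost: float, enrollments: dict[str, int]) -> dict[str, float]:
    """Divide a shared contract cost among members in proportion to enrollment."""
    total_students = sum(enrollments.values())
    return {name: total_cost * n / total_students for name, n in enrollments.items()}

if __name__ == "__main__":
    members = {"College A": 1_800, "College B": 2_500, "College C": 950}
    for name, share in split_cost(300_000, members).items():
        print(f"{name}: ${share:,.0f}")
```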

6. Leveraging Existing Fiber Infrastructure

Institutions can leverage their existing infrastructure, such as a shared or in-place optical fiber wide-area network (WAN), to create efficiencies and reduce costs. Many academic institutions and colleges already have extensive fiber optic networks connecting campuses and facilities, which can be used to support virtual IT solutions. Instead of investing in redundant infrastructure, institutions can tap into this existing fiber backbone to deliver cloud services, remote IT support, and distributed computing. This approach capitalizes on the scale of the network already in place, ensuring that investments in technology infrastructure are maximized and reducing the need for costly new systems.

HOW IT WORKS

Leveraging Existing Infrastructure: Academic institutions and colleges that already have extensive fiber optic networks connecting their campuses, research centers, and other facilities can maximize the value of these existing assets. Instead of investing in new, separate infrastructure for virtual IT solutions (such as cloud services or distributed computing), institutions can use their existing fiber backbone to deliver these services. The fiber optic network provides high-speed, high-bandwidth connectivity that can support data-heavy operations, such as cloud computing, remote IT support, and large-scale data processing, without needing to build redundant systems.

Connecting Distributed Resources: With a shared or in-place wide-area network (WAN) based on fiber optics, academic institutions can connect multiple campuses, satellite locations, or research facilities into a single unified infrastructure. This allows institutions to deploy virtual IT services across a wide geographic area, such as cloud-based applications, centralized data storage, or remote technical support, with minimal latency and optimal performance.

Maximizing Utilization of Existing Capacity: Instead of building separate, isolated networks for specific IT functions, institutions can consolidate services on their existing fiber networks, which are already equipped to handle large data transfers. For example, an academic institution might use its fiber backbone to deliver distributed computing capabilities, allowing researchers across different campuses to access centralized high-performance computing (HPC) resources. This approach eliminates the need for duplicative investments in physical infrastructure, ensuring that the existing network is fully utilized and cost-effective.

Supporting Virtual IT Solutions: Fiber optic networks are particularly well-suited for supporting virtual IT solutions, such as cloud-based applications, remote data storage, and remote IT support services. Institutions can use their fiber networks to create a high-speed, reliable connection to cloud providers or to operate their own centralized IT infrastructure, reducing the need for extensive on-site hardware and creating efficiencies across their campuses.

BENEFITS

Cost Efficiency: By utilizing existing fiber optic infrastructure, academic institutions can avoid the significant capital expenditures associated with building new, redundant networks or data centers. This can lead to substantial cost savings, as institutions can redirect funds that would have gone toward new infrastructure into other strategic initiatives, such as research programs, academic resources, or student services.

Optimized Resource Utilization: Fiber optic networks already provide high-speed, high-capacity connections between different parts of the institution. By integrating virtual IT services into this infrastructure, academic institutions can maximize the utilization of their existing resources. This ensures that the full potential of their network is realized, reducing the need for over-investment in new systems and allowing the institution to operate more efficiently.

Scalability and Flexibility: Fiber optic networks offer scalability and flexibility, enabling academic institutions to scale their IT services up or down based on demand. Whether expanding cloud storage, deploying more virtual workstations, or supporting remote IT support for a growing student body, academic institutions can easily adjust their use of the network to meet evolving needs. The scalability of fiber optic infrastructure makes it possible to support new technology solutions as they arise, without requiring major new investments in physical infrastructure.

Improved Performance and Reliability: Fiber optic networks are known for their high bandwidth and low latency, which ensures fast and reliable connectivity for virtual IT services. Whether accessing cloud applications, participating in online classes, or conducting large-scale research projects that require high-performance computing, the quality and speed of fiber-optic connections ensure that users have a seamless experience. This high-performance capability is essential for supporting sophisticated IT solutions like remote data storage, distributed computing, and real-time collaboration tools.

Future-Proofing IT Infrastructure: As academic institutions look to implement more advanced technologies, such as AI, big data analytics, or machine learning, leveraging an existing fiber network can help future-proof their IT infrastructure. Fiber optic networks have the capacity to support increasingly data-intensive applications, allowing institutions to adopt new technologies without the need to constantly overhaul their physical infrastructure.

EXAMPLE

A large public academic institution with multiple campuses across a city has already invested heavily in a fiber optic wide-area network (WAN) that connects all of its facilities. Rather than building new, costly infrastructure to support virtual IT services, the academic institution decides to maximize its existing fiber network by utilizing it to deploy cloud-based applications and distributed computing resources across its campuses.

The academic institution integrates high-performance computing (HPC) resources at a centralized location and uses the fiber optic network to provide researchers at different campuses with seamless access to these powerful computing systems. This eliminates the need for each campus to maintain its own computing clusters, saving the academic institution both in hardware costs and ongoing maintenance expenses.

Additionally, the academic institution uses its fiber backbone to provide remote IT support services to students and faculty. By connecting remote helpdesk agents to the network, the academic institution can offer faster troubleshooting and support, without requiring on-site visits or additional resources. With this approach, the academic institution maximizes the utility of its existing fiber infrastructure, avoids redundancy, and creates an efficient, scalable IT ecosystem that can grow as the institution’s needs evolve.

This strategy not only saves costs but also ensures that the academic institution can easily scale its IT services in the future as new demands arise, all while maintaining a high level of performance and reliability for its users.
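
To see why link capacity matters for the data-heavy workloads described above, the short calculation below estimates how long it would take to move a large research dataset over links of different speeds. The dataset size, link rates, and utilization factor are illustrative assumptions only.

```python
# Estimated transfer time for a research dataset over links of different speeds.
# The dataset size, link rates, and utilization factor are illustrative assumptions.

def transfer_hours(dataset_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours needed to move dataset_gb over a link_gbps link at a given utilization."""
    bits_to_move = dataset_gb * 8e9                        # gigabytes -> bits
    usable_bits_per_second = link_gbps * 1e9 * efficiency  # realistic throughput
    return bits_to_move / usable_bits_per_second / 3600

if __name__ == "__main__":
    dataset_gb = 50_000  # a 50 TB instrument dataset, expressed in gigabytes
    for gbps in (1, 10, 100):
        print(f"{gbps:>3} Gbps link: {transfer_hours(dataset_gb, gbps):6.1f} hours")
```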

7. Cooperative Purchasing

Higher education institutions, particularly those in close geographic proximity or within certain academic networks, can pool their purchasing power to negotiate better deals on hardware, software, and IT services. For example, academic institutions can collectively buy software licenses, cloud services, or even hardware such as servers, data storage, and network equipment. By collaborating on procurements, institutions can secure volume discounts, shared services agreements, or bundled pricing that reduces individual costs and provides access to higher-end solutions that might otherwise be unaffordable.

HOW IT WORKS

Pooling Purchasing Power: In a cooperative purchasing model, multiple academic institutions or colleges collaborate to buy hardware, software, or IT services in bulk. By aggregating their purchasing needs, these institutions can leverage their collective buying power to negotiate better deals from vendors. This can apply to software licenses, cloud services, network infrastructure, servers, data storage, and other essential IT resources.

Joint Procurement Agreements: Rather than each institution purchasing IT resources independently, they join forces to negotiate shared contracts with suppliers or service providers. This could involve agreeing on bulk software licenses, securing discounted rates for cloud storage, or buying servers and networking equipment at reduced prices through bundled offers. By committing to larger-volume purchases, institutions can access high-end technology at a fraction of the cost.

Shared Service Models: In addition to negotiating better deals on equipment and software, institutions can enter into shared service agreements. For example, several academic institutions in close geographic proximity might collaborate on maintaining shared data centers or network infrastructure, pooling their resources to fund these services while benefiting from the high-quality, enterprise-level capabilities they wouldn’t typically afford on their own.

Standardized Solutions Across Institutions: Cooperative purchasing can also streamline technology adoption by standardizing solutions across multiple institutions. For instance, a group of academic institutions might decide to adopt the same cloud provider or enterprise resource planning (ERP) software, allowing for shared training, better interoperability, and easier support management. This creates operational efficiencies while also providing a unified approach to technology that simplifies implementation and maintenance.

BENEFITS

Cost Savings: The primary benefit of cooperative purchasing is the significant reduction in costs. By pooling their purchasing power, academic institutions can access volume discounts, shared service agreements, and bundled pricing. This can reduce the overall cost of IT infrastructure, software licenses, and services, allowing each institution to stretch its budget further while acquiring more advanced technologies.

Access to Enterprise-Level Solutions: Smaller institutions that may not have the budget to purchase expensive hardware, cloud services, or specialized software on their own can gain access to these tools through cooperative purchasing. By participating in joint procurement, they can benefit from enterprise-level solutions like high-capacity data storage, cutting-edge cybersecurity tools, and large-scale cloud platforms, which would otherwise be out of reach.

Increased Negotiating Leverage: By collaborating with other institutions, academic institutions can negotiate more favorable terms with vendors. Vendors are often willing to offer better pricing, extended warranties, or enhanced support in exchange for securing larger contracts, which can be split across multiple schools. This negotiating power increases as more institutions participate in the purchasing group.

Efficiency and Streamlined Procurement: Cooperative purchasing models streamline the procurement process by reducing the administrative burden. Instead of each institution separately managing procurement contracts, a collective agreement can be made, simplifying the process for all parties. Shared procurement agreements may also include standardized terms for service level agreements (SLAs) and support, making it easier for institutions to manage their technology needs.

Fostering Collaboration and Best Practices: Beyond cost savings, cooperative purchasing fosters collaboration between institutions. As schools share procurement strategies and technology solutions, they can also exchange knowledge and best practices for using these tools effectively. This collaborative approach can lead to better adoption of new technologies, improved system integrations, and stronger relationships among institutions.

EXAMPLE

A group of mid-sized academic institutions that are Edge members decides to collaborate on purchasing software licenses for a new learning management system (LMS) and cloud-based storage solutions. Individually, each school faces high costs for these technologies, which would strain its budget. However, by pooling their purchasing power, the consortium can negotiate a bulk discount on LMS licenses and a shared cloud storage contract with a major provider, reducing the cost by nearly 40%.

In addition to the software and storage solutions, the consortium also decides to purchase a set of enterprise-level servers and networking equipment through a cooperative deal with a hardware vendor. The reduced pricing on the hardware allows the group to set up shared data centers, which will host not only their own resources but also services like remote IT support and backup storage for each member institution.

As part of the agreement, the schools also establish standardized processes for onboarding and training staff on the new technologies. The shared knowledge and training resources help all institutions integrate the new LMS and storage solutions quickly and effectively, minimizing disruption and improving the user experience for both faculty and students.

This collaborative approach not only saves the participating academic institutions money but also allows them to access enterprise-level solutions that would have been too expensive to purchase individually. It also enables them to share IT best practices, improve operational efficiencies, and ensure a more consistent technology experience across all campuses.

8. “As-a-Service” Solutions

A growing trend in the virtual IT model is the use of “as-a-Service” solutions, such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). By leveraging these offerings, institutions can access powerful tools and services without the upfront costs or complexity of managing them in-house. For example, academic institutions can procure software platforms for learning management, student information systems, or research tools as a service, reducing the need for internal development, maintenance, and support. Similarly, IaaS offerings like virtual servers or data storage platforms allow institutions to scale their infrastructure on-demand, paying only for what they use. By joining procurement cooperatives, academic institutions can negotiate better pricing for these as-a-service solutions.

Procurement cooperatives allow institutions to aggregate their needs across multiple entities, achieving volume pricing for software, platforms, and infrastructure that they may not be able to secure individually. This strategy not only reduces costs but also enables access to enterprise-level technology without the associated complexity of managing hardware, software updates, and security concerns internally.

HOW IT WORKS

Leveraging “As-a-Service” Solutions: Institutions are increasingly adopting “as-a-Service” (aaS) offerings—such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS)—to access powerful IT tools and platforms without the burden of managing them internally. These cloud-based solutions allow academic institutions to subscribe to software, platforms, or infrastructure on-demand, eliminating the need for large upfront investments, ongoing maintenance, and internal support staff.

SaaS: Academic institutions can procure software applications such as learning management systems (LMS), student information systems (SIS), or research collaboration tools directly from cloud providers. These services are updated automatically, reducing the institution’s workload and ensuring access to the latest features without worrying about infrastructure or software patches.

PaaS: Academic institutions can use cloud platforms to develop and deploy applications without the need to manage underlying hardware or software. PaaS offerings provide the tools and frameworks needed for custom development, such as database management systems, programming environments, and hosting platforms, which are maintained and updated by the service provider.

IaaS: Institutions can rent virtualized computing resources (e.g., virtual machines, data storage, and network infrastructure) on-demand. This allows them to scale infrastructure up or down based on current needs, paying only for what is used. For example, an academic institution conducting a large research project may scale up storage and computing power temporarily, then scale down once the project concludes.

Procurement Cooperatives: Academic institutions can participate in procurement cooperatives such as EdgeMarket to aggregate their collective needs for these as-a-Service solutions. By joining together with other institutions, they can negotiate better pricing, secure volume discounts, and access bundled offerings that would be difficult to obtain individually. Through cooperative procurement, institutions can pool their software and infrastructure demands, allowing them to benefit from economies of scale.

Eliminating the Need for In-House Management: One of the key advantages of “as-a-Service” solutions is that the service provider manages the hardware, software updates, and security concerns. Academic institutions can focus on utilizing the services rather than worrying about their maintenance. For example, with SaaS for student management, the cloud provider handles updates, security patches, and compliance, while the academic institution benefits from the service’s functionality.

BENEFITS

Cost Savings: “As-a-Service” models reduce the upfront capital costs of purchasing software, hardware, and platforms. Institutions only pay for what they use, which means they can avoid large investments in infrastructure and scale their IT resources according to demand. When combined with procurement cooperatives, academic institutions can further reduce costs by securing volume pricing, ensuring that they pay less for the same services.

Access to Enterprise-Level Technology: Smaller institutions or those with limited budgets can access powerful, enterprise-level software, platforms, and infrastructure without the need for substantial investments in on-premise solutions. By subscribing to SaaS, PaaS, and IaaS, academic institutions gain access to cutting-edge technologies such as data analytics platforms, advanced security tools, and machine learning frameworks that might otherwise be out of reach.

Scalability and Flexibility: “As-a-Service” offerings provide institutions with the ability to scale their IT resources as needed. Whether for a specific research project, peak enrollment periods, or expanding online course offerings, academic institutions can quickly adjust their service levels. For instance, IaaS allows for on-demand provisioning of virtual servers or storage, which can be scaled up during busy periods and scaled down when demand decreases.

Reduced Complexity and Maintenance: Managing IT infrastructure, software updates, and security can be time-consuming and resource-intensive. By shifting to as-a-Service models, academic institutions can offload these responsibilities to service providers. The cloud provider takes care of patching, upgrading, and securing the software, allowing the institution’s IT staff to focus on more strategic initiatives, such as supporting faculty and students or advancing research.

Faster Time to Implementation: Cloud-based solutions are typically quicker to implement than traditional on-premise systems. Academic institutions can quickly deploy SaaS for administrative tasks, PaaS for custom application development, or IaaS for infrastructure, without the need for long installation times or complex configurations. This speed allows institutions to respond more rapidly to evolving needs and opportunities.

EXAMPLE

A mid-sized academic institution wants to implement a new student information system (SIS) but is concerned about the costs and complexity of managing the system internally. Instead of purchasing expensive on-premise software and building the necessary infrastructure, the academic institution decides to adopt a SaaS-based SIS. This allows them to access the system immediately without the need for installing or maintaining hardware, while ensuring the software is always up to date with the latest features and compliance standards.

Additionally, the academic institution joins the Edge procurement cooperative that includes other academic institutions in its area. Through this cooperative, the academic institution is able to secure a volume discount on the SaaS subscription, typically reducing the annual cost by 25%. The Edge procurement cooperative also negotiates bundled services with a cloud provider for additional PaaS offerings, allowing the academic institution to develop and deploy custom applications for faculty and research teams without the need to manage its own development environment.

On the infrastructure side, the academic institution uses an IaaS provider to rent virtual servers and storage for hosting its website, databases, and research data. This provides the academic institution with the flexibility to scale up its infrastructure during periods of heavy research activity or during new course launches. When the demand for infrastructure subsides, the academic institution scales down its services, ensuring that it only pays for the resources it uses.

By combining SaaS, PaaS, and IaaS, and leveraging cooperative purchasing, the academic institution gains access to high-end IT solutions, reduces its upfront capital expenditure, and offloads the complexity of management and maintenance to trusted service providers. This enables the institution to focus more on its core mission of teaching, learning, and research while benefiting from enterprise-level technologies at an affordable cost.
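
The “scale up for a busy period, scale down afterward” pattern described for IaaS can be sketched in a few lines against the AWS SDK (boto3). The AMI ID, instance type, and instance counts below are hypothetical placeholders, and in practice an institution would typically manage this through auto-scaling groups or infrastructure-as-code tooling rather than ad hoc API calls.

```python
# Sketch of on-demand IaaS provisioning: launch extra virtual servers for a busy
# period, then terminate them so the institution stops paying for idle capacity.
# The AMI ID, instance type, and counts are hypothetical placeholders.
import boto3

def scale_up(count: int) -> list[str]:
    """Launch `count` temporary instances and return their IDs."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="t3.large",          # placeholder instance size
        MinCount=count,
        MaxCount=count,
    )
    return [inst["InstanceId"] for inst in response["Instances"]]

def scale_down(instance_ids: list[str]) -> None:
    """Terminate the temporary instances once the peak has passed."""
    ec2 = boto3.client("ec2")
    ec2.terminate_instances(InstanceIds=instance_ids)

if __name__ == "__main__":
    extra = scale_up(4)   # e.g., added capacity for registration week
    # ... busy period runs its course ...
    scale_down(extra)     # stop paying once traffic returns to normal
```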

9. Staff Augmentation

Staff augmentation refers to the practice of supplementing an institution’s internal workforce with external professionals or specialized contractors for a temporary period. For academic institutions and colleges adopting Virtual IT, staff augmentation allows them to scale their IT capacity and capabilities without incurring the overhead costs of hiring full-time, permanent staff.

HOW IT WORKS

Specialized Expertise on Demand: Higher education institutions may need highly skilled IT professionals (e.g., cybersecurity experts, cloud architects, or data scientists) but may not be able to justify the cost of maintaining a permanent team for every specialized area. Through staff augmentation, academic institutions can access specific expertise when needed, such as for a new technology implementation or a cybersecurity audit.

Flexible Workforce Management: By bringing in external staff on a temporary or project basis, institutions can adapt to fluctuating needs without the long-term commitment and costs associated with hiring permanent employees.

Cost-Effective Scaling: Staff augmentation can help manage fluctuating workloads, especially during periods of peak demand, like new semester launches, system upgrades, or faculty transitions. This flexibility ensures the institution only pays for the expertise and support it needs when it needs it.

BENEFITS

Cost Control: Institutions only pay for the skills and time needed, avoiding the fixed costs of permanent salaries, benefits, and long-term commitments.

Rapid Response: Staff can be brought in quickly, filling gaps or supporting urgent projects like software rollouts, data migration, or infrastructure upgrades.

Access to Global Talent: Institutions can source specialized professionals from around the world, which is particularly beneficial when local talent pools may be limited or too expensive.

EXAMPLE

A small liberal arts college might not have a full-time network engineer but could bring in an external consultant to oversee its networking upgrades or ensure compliance with new data privacy laws. This allows them to avoid the costs associated with hiring a permanent staff member while still benefiting from top-tier expertise.

10. Co-location Space and Storage

Co-location refers to the practice of outsourcing the physical infrastructure required for data storage, computing, and networking to a third-party provider. Instead of maintaining large data centers on campus, academic institutions and colleges can rent space in a co-location facility, where their hardware is housed alongside other clients’ equipment. This approach ensures robust infrastructure, high reliability, and scalability without the substantial upfront costs.

Academic institutions can place their servers, storage systems, and networking equipment in a co-location facility, where providers offer high-performance infrastructure like power supply, cooling, security, and network bandwidth. The academic institution maintains control over its equipment and software but doesn’t have to manage the physical environment.

HOW IT WORKS

Scalable Storage Solutions: Institutions can scale their storage needs up or down as required. For example, they can add additional servers or storage arrays during peak data usage times (e.g., during admissions or research project cycles), but without the capital expenditure required for on-site infrastructure.

Off-site Disaster Recovery: Co-location facilities often offer disaster recovery and backup services, ensuring that critical data is stored safely and can be quickly restored in the event of a disruption. This reduces the need for costly redundant systems on campus and improves data security.

BENEFITS

Cost Savings: Co-location facilities provide economies of scale that can make high-quality data center services more affordable for smaller institutions. They eliminate the need for expensive capital investment in on-campus data centers and reduce ongoing operational costs like power, cooling, and staffing.

Reliability & Redundancy: Co-location facilities typically offer high levels of redundancy in terms of power, network connectivity, and cooling. This ensures that critical IT systems remain up and running even in the event of local disruptions or hardware failures.

Security & Compliance: Co-location providers invest heavily in security, with 24/7 monitoring, biometric access controls, fire suppression systems, and strict compliance with regulations (e.g., FERPA, HIPAA). This is especially important for academic institutions handling sensitive student and research data.

EXAMPLE

A small to mid-sized academic institution might use a co-location facility to store and back up large research datasets and host its virtual learning environment (VLE). By using an off-campus data center, they can reduce the financial burden of maintaining an on-campus facility while benefiting from advanced security, uptime guarantees, and scalability. This approach allows the academic institution to focus on its core educational and research missions without worrying about managing physical infrastructure.
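
As a minimal sketch of the off-site backup workflow a co-location or disaster-recovery arrangement supports, the snippet below archives a data directory each night and copies the snapshot to a remote facility with rsync. The directory paths and remote destination are hypothetical, and a real setup would add retention policies, encryption, and restore testing.

```python
# Sketch of a nightly off-site backup of the kind a co-location or disaster-recovery
# arrangement supports: archive a data directory, then copy the snapshot off-site.
# The directory paths and the remote destination are hypothetical placeholders.
import datetime
import pathlib
import subprocess
import tarfile

def make_snapshot(source_dir: str, staging_dir: str) -> pathlib.Path:
    """Create a dated .tar.gz snapshot of source_dir inside staging_dir."""
    stamp = datetime.date.today().isoformat()
    archive = pathlib.Path(staging_dir) / f"research-data-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=pathlib.Path(source_dir).name)
    return archive

def ship_offsite(archive: pathlib.Path, remote: str) -> None:
    """Copy the snapshot to the off-site facility over SSH using rsync."""
    subprocess.run(["rsync", "-av", str(archive), remote], check=True)

if __name__ == "__main__":
    snapshot = make_snapshot("/srv/research-data", "/var/backups")
    ship_offsite(snapshot, "backup@colo.example.net:/backups/")
```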

In Summary

Why Consider Edge’s Virtual IT Solutions and Services?

Cost Savings: The most compelling argument for outsourcing IT functions is financial. By offloading certain services to third-party providers, institutions can reduce their overall IT expenses, including salaries, benefits, and the costs associated with maintaining on-site hardware and software. Outsourcing eliminates the need for large capital expenditures in IT infrastructure and the operational overhead required to support it.

Access to Expertise: IT outsourcing providers often specialize in specific technology areas, ensuring that institutions have access to highly skilled professionals who are up-to-date with the latest developments in technology. Outsourcing also allows schools to quickly adapt to new technologies and innovations without having to hire and train internal staff to keep pace.

Scalability and Flexibility: Virtual IT solutions offer a level of scalability that is hard to achieve with in-house teams. As an academic institution’s needs change — whether due to fluctuating student enrollment, new academic programs, or shifts in research priorities — IT services can be scaled up or down efficiently. Cloud solutions, for instance, provide on-demand resources, allowing academic institutions to adjust their infrastructure based on usage, without needing to invest in unused capacity.

Improved Focus on Core Mission: By outsourcing IT functions, higher education leaders can focus more on their institution’s core mission — teaching, research, and student services — rather than diverting attention to the complexities of maintaining a high-functioning IT department. Virtual IT helps ensure that technology systems run smoothly, so faculty and staff can focus on their primary responsibilities.

Enhanced Security and Reliability: Many third-party IT providers offer robust security measures and compliance with industry standards that might be challenging for individual institutions to maintain on their own. With the increasing threats of cyberattacks and data breaches, outsourcing to a specialized provider can ensure that academic institutions have access to the latest security technologies and practices.

Global Collaboration and Innovation: Outsourcing IT functions can also enhance collaboration by tapping into a global pool of talent. Academic institutions can engage in international research partnerships or offer virtual courses and services more easily, as their IT infrastructure can support global engagement.

Conclusion

In the face of significant financial challenges, higher education institutions must adopt innovative strategies to ensure their sustainability and continued success. Virtual IT solutions offer a promising pathway for institutions to reduce operational costs, improve flexibility, and enhance technological capabilities, all while reallocating resources toward their core educational mission. By outsourcing key IT functions—such as infrastructure management, cybersecurity, and support services—institutions can not only achieve substantial cost savings but also gain access to specialized expertise, improved scalability, and enhanced security, which are increasingly critical in today’s rapidly evolving technological landscape.

As the financial pressures on colleges and universities continue to mount, the adoption of Virtual IT solutions presents an opportunity to meet these challenges head-on. Cloud-based platforms, outsourcing IT services, and adopting virtualized infrastructures allow institutions to remain agile, competitive, and responsive to the changing needs of students, faculty, and the broader educational ecosystem. Moreover, these strategies enable institutions to focus on their primary goals—delivering high-quality education, fostering research, and supporting student success—while entrusting their technology needs to expert providers.

Ultimately, Virtual IT offers higher education institutions a way to embrace technological advancements without the prohibitive costs associated with maintaining large in-house IT departments. By leveraging these solutions, institutions can ensure they remain financially viable and continue to fulfill their mission in an increasingly complex and competitive higher education environment. The strategic integration of Virtual IT can be a key enabler in transforming challenges into opportunities, allowing colleges and universities to thrive in the years to come.

About Edge

Edge serves as a member-owned, nonprofit provider of high performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable and thought-leading purpose-built, advanced connectivity, technologies and services.

The post The Edge Ecosystem of IT Virtualization Solutions and Services appeared first on NJEdge Inc.


GS1

Enhancing safer travel with predictive maintenance in transportation

In partnership with OriginTrail and GS1 Switzerland, Swiss Federal Railways (SBB) has taken important steps to make train travel safer and more sustainable.

Managing railways requires handling vast amounts of data from multiple operators—often in real time—creating operational complexity and risk. To address this challenge, SBB introduced a specialized EPCIS repository powered by the OriginTrail Decentralized Knowledge Graph (DKG).

Already in production since 2021, this EPCIS repository—actively used by SBB and its partners—has demonstrated clear improvements in safety and sustainability. It enables seamless train tracking across the EU by querying a network of railway GS1 EPCIS repositories, supports advanced analytics for predictive maintenance of train wheels using IoT data, and facilitates the tracking of welding and maintenance events across the Swiss Federal Railways network with input from over 10 welding partners. Additionally, it enables event tracking for Forged Tongues, further enhancing operational reliability.
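To make the repository interaction more concrete, here is a minimal sketch of how a maintenance partner might capture an inspection event, assuming the repository exposes a standard EPCIS 2.0 capture interface. The URL, asset identifiers, and field values are invented for illustration and do not reflect SBB’s actual data model.

```python
import json
import requests

# Hypothetical capture endpoint; a real deployment would use the operator's
# own EPCIS 2.0 repository URL and authentication.
CAPTURE_URL = "https://epcis.example-railway.ch/capture"

# A minimal EPCIS-style ObjectEvent recording a wheelset inspection.
event = {
    "type": "ObjectEvent",
    "eventTime": "2025-02-20T09:30:00.000Z",
    "eventTimeZoneOffset": "+01:00",
    "epcList": ["urn:epc:id:giai:7612345.WHEELSET-0042"],  # invented asset ID
    "action": "OBSERVE",
    "bizStep": "inspecting",
    "readPoint": {"id": "urn:epc:id:sgln:7612345.00001.0"},  # invented location
}

document = {
    "type": "EPCISDocument",
    "schemaVersion": "2.0",
    "creationDate": "2025-02-20T09:31:00.000Z",
    "epcisBody": {"eventList": [event]},
}

# Post the event document to the repository's capture interface.
response = requests.post(
    CAPTURE_URL,
    data=json.dumps(document),
    headers={"Content-Type": "application/json"},
    timeout=10,
)
print(response.status_code)
```

Partners querying the repository would retrieve events like this one to feed their predictive maintenance analytics.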

gs1_use-case_rail_origintrail.pdf

Thursday, 13. February 2025

We Are Open co-op

Systemic complexities with AI

This series is a brief look at the content within Harnessing AI for Environmental Justice, a report WAO developed for Friends of the Earth, funded by Mozilla. Read or download the full report here.

Check out Part 1, Understanding Predominant Narratives in AI. In this second part, we look at the systemic complexities around environmentalism and technology.

Understanding Predominant Narratives in AI
Systemic complexities with AI
Starting principles for the ethical use of AI
How to be an Activist in a World of AI

When viewed with all of the facts, nothing is simple. The same is true with AI, and everything else related to environmental justice and digital rights. As a result, we need to be aware of complexity.

The climate crisis disproportionately affects low-income communities, women and marginalised communities that are already affected by systemic inequalities. (Kazansky, 2022). We know that ‘black-box’ algorithms, a lack of regulation of corporate conglomerates, and obscured climate impact reporting are common within the tech industry. (Kazansky, 2022). Because of the proprietary nature of their AI development, companies are not obligated to disclose essential information, including labour practices in the AI supply chain and the procurement of materials or rare minerals. (Kazansky, 2022). Digital rights activists tell us how data privacy and surveillance are affecting our most vulnerable communities as well as wider society.

These issues are systemic, touch many different policy points, and are best addressed through thoughtful coalitions and the important, painstaking work of activism.

Cloud Computing by Nadia Piet + AIxDESIGN & Archival Images of AI

Climate and Energy

AI is being used to address the climate emergency and reduce energy consumption in various ways. For example, the UN talks about improving prediction modelling for how climate change is impacting our societies using machine learning and predictive AI. (United Nations, 2023). Other organisations are using similar technologies to help prevent wildfires, promote reforestation or improve early warning systems. (Climate One, 2024).

However, none of these examples use generative AI for positive climate action, nor do they tackle the rising energy consumption required by new data centres.

Journalists from across the environmental and tech sectors repeatedly emphasise the significant energy consumption associated with generative AI models like ChatGPT. They point out that the energy and water usage to run and cool data centres is growing exponentially because of AI, and that by 2027 the sector will have the same annual energy demand as the Netherlands. (Vincent, 2023). However, it’s important to understand that, just as when we compare one country’s emissions with the rest of the world, data centres are responsible for a small percentage of total global energy-related emissions.

This does not mean that there is nothing to concern us here. As we have not yet transitioned to global renewable energy, increased energy usage incentivises the use of fossil fuels. (Kazansky, 2022). Indeed fossil fuels are still responsible for over 80% of our global energy needs. (Ritchie, 2024). The energy mix in some places is increasingly renewable, whereas other places are almost entirely dependent on approaches emitting high levels of CO2.

Nature and Environment

AI can be used to help in everything from conservation efforts with endangered species (Wildlife.AI, 2024) through to controlling pests such as the Desert Locust. (PlantVillage, 2024). In these cases, AI usually takes the form of object detection and tracking, computer vision, and machine classification. This is often combined with cloud computing, analytics and satellite intelligence to provide insights that farmers, activists, and conservationists can act on.

Generative AI is not central to these efforts, but the energy being used to generate synthetic text, images, and video is a mainstream issue. Related is the issue of resource consumption, which tends to receive less attention. This is particularly true of freshwater usage, as data centres use water for cooling, and the explosion of generative AI technologies has exacerbated freshwater scarcity. The trend shows no signs of slowing. (Li, Pengfei, et al., 2023).

Local communities in areas where data centres are built deal with a variety of ecological and social issues due to this resource consumption. Human communities, farm lands, natural areas and biotopes suffer as Big Tech works to hide its water usage. (Smith & Adams, 2024). Data centres are competing with these local communities and ecosystems for scarce water resources, especially in regions already experiencing drought. (HuggingFace, 2024). This competition puts a further strain on water availability and poses a threat to biodiversity in these vulnerable areas.

Clean technology and innovative, cyclical solutions in resourcing are critical to the protection of biodiversity.

Rights and Justice

Machine learning, image recognition systems and predictive analytics can be used to work on a number of issues related to the UN’s Sustainable Development Goals (SDGs). For example, issues relating to hunger, education, health and well-being can be addressed in part by AI technologies. We are beginning to see innovative uses of AI such as helping to identify or predict illnesses as well as showing us ways to reduce food waste.

However, good examples of how generative AI is supporting such progressive initiatives are hard to come by. When it comes to rights and justice, it is much easier to unpack how generative AI is causing harm. Alongside the more visible issue of resource use, a slew of discriminatory and exploitative practices are made worse by the AI boom.

The waste and pollution, ecosystem collapse and worker exploitation associated with AI advancements disproportionately impact the communities who are already most vulnerable to the climate crisis.

Civil society organisations and communities play a pivotal role in challenging the extractive practices, spanning resource extraction, labour and further colonialism, that are shaping AI development.

There’s more to read in the full report. Read or download it here.

Technology has changed so much about our world. Now, AI is changing things again and at hyperspeed. We work to understand its potential implications for our communities, learners and programmes. Do you need help with AI Literacies, strategy, or storytelling? Get in touch!

References

Kazansky, B., et al. (2022). ‘At the Confluence of Digital Rights and Climate & Environmental justice: A landscape review’. Engine Room. Available at: https://engn.it/climatejusticedigitalrights (Accessed: 24 October 2024)

United Nations (2023). ‘Explainer: How AI helps combat climate change’. UN News, 3rd November. Available at: https://news.un.org/en/story/2023/11/1143187 (Accessed: 24 October 2024).

Climate One (2024) ‘Artificial Intelligence, Real Climate Impacts’ [podcast], 19th April. Available at: https://www.climateone.org/audio/artificial-intelligence-real-climate-impacts (Accessed: 24 October 2024).

Ritchie, H., Rosado, P., & Roser, M. (2024). Renewable Energy. Our World in Data. Available at: https://ourworldindata.org/renewable-energy (Accessed: 28 November 2024)

Wildlife.AI — Using artificial intelligence to accelerate conservation. (n.d.). Wildlife.AI. Available at: https://wildlife.ai/ (Accessed: 11 December 2024).

Plantvillage. Penn State University (n.d.). Available at: https://plantvillage.psu.edu (Accessed: 11 December 2024).

Li, Pengfei, et al. (2023) Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. Available at: https://arxiv.org/pdf/2304.03271 (Accessed: 20 November 2024)

Smith, H. & Adams, C. (2024) ‘Report: Thinking about using AI?’. The Green Web Foundation. Available at: https://www.thegreenwebfoundation.org/publications/report-ai-environmental-impact/ (Accessed: 24 October 2024).

Hugging Face (2024) The Environmental Impacts of AI. Available at: https://huggingface.co/blog/sasha/ai-environment-primer (Accessed: 24 October 2024).

Systemic complexities with AI was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 12. February 2025

FIDO Alliance

Thales Launches FIDO Key Management Solution for Enterprise Passwordless Authentication

Thales has unveiled a new solution designed to streamline the deployment and management of FIDO security passkeys for large-scale implementations. The OneWelcome FIDO Key Lifecycle Management solution enables organizations to efficiently manage the complete lifecycle of FIDO keys while transitioning to passwordless authentication systems. The launch follows Thales’ previous efforts in passwordless authentication, expanding their enterprise security portfolio.

The solution provides IT teams with comprehensive control over FIDO key management, from initial enrollment through to eventual revocation. By allowing IT departments to pre-register keys and handle lifecycle management tasks, the platform helps reduce the burden on end users while maintaining security standards. The approach supports recent FIDO Alliance guidelines for enterprise passkey implementation, which emphasize the importance of streamlined deployment processes.

A key feature of the solution is its integration with Microsoft Entra ID through FIDO2 provisioning APIs, enabling organizations to pre-register Thales FIDO keys for their users. The integration is particularly relevant for enterprises using Microsoft 365, providing secure authentication capabilities from initial deployment. The feature arrives as Microsoft implements mandatory multi-factor authentication across its enterprise platforms.


The Engine Room

Announcing our two new Matchbox partners

We’re excited to welcome two partners to our Matchbox Program this year: Trans Youth Initiative-Uganda (TYI-Uganda) and TechHer.

The post Announcing our two new Matchbox partners appeared first on The Engine Room.


Project VRM

A simple plan to de-enshittify CVS

Fifteen years ago, The Onion published this story:

Study Finds Paint Aisle At Lowe’s Best Place To Have Complete Meltdown

Now it’s the vitamin aisle at CVS:

Enshittification at work.

When Cory Doctorow coined enshittification, he was talking about how online platforms such as Google, Amazon, and TikTok get shitty over time. Well, the same disease also afflicts many big retail operations, especially ones that flood their zones with discounts and other gimmicks, enshittifying what marketers call “the customer experience” (aka CX).

Take the vitamin aisle, above. The only people who will ever get down on all fours to read the yellow tags near the floor are the cursed employees who have to creep the length of the aisle putting them there.

For customers, the main cost of shopping at CVS is cognitive overhead. Think about—

All those yellow stickies
All the slow-downs at check-out when you wave your barcode at a reader, punch your phone number into a terminal, or have a CVS worker do the same
All the conditionalities in every discount or “free” offer that isn’t
All the yard-long receipts, such as this one:

And the app!

OMFG,  all we really need the damned app for is the one CVS function our life depends on: the pharmacy.

To be fair, the app doesn’t suck at the basics (list of meds, what needs to be refilled, etc.). But it does suck at helping you take advantage of CVS’s greatest strength: that there are so many of them. Specifically,

While that’s down a bit from the peak in 2021, CVS is still the Starbucks of pharmacies. And while they are busy laying off people while investing in tech, you’d think they would at least make it easy to move your prescription from one store to another, using the app.  But noooo.

What the app is best for is promotional crap. For example, this:

Look at the small gray type in the red oval: 198 coupons!

After I scroll down past the six Extrabucks Rewards (including the two above), I get these:

First, who wants a full-priced item when it seems damn near everything is discounted?

Second, you’d think that after all these years of checking out with my Extracare barcode, with the app showing me (under “Buy It Again”) all the stuff I’ve purchased recently, CVS would know I am a standard-issue dude with no interest in cosmetics. So why top the list of coupons with that shit? I suppose it’s to make me scroll down through the other 178 coupons to find other stuff I might want at a cheaper price.

I just did that and found nothing. Why? Because most of the coupons are for health products I already bought or don’t need. (I’m not sick right now.) Also, almost all of the coupons (as you see) expire three days from now.

Now think about the cognitive and operational overhead required to maintain that whole program at CVS. Good gawd.

And is it necessary? At all? When you’re the Starbucks of pharmacies?

Without exception, all loyalty programs like this one are coercive. They are about trapping and milking customers.

But do stores need them? Do customers? Does CVS?  Really? When its customers are already biased by convenience.

Pro Tip: Real loyalty is earned, not coerced.

Want your store, or your chain, to be loved? Take some lessons from the most loved chain in the country: Trader Joe’s. In a chapter of The Intention Economy called “The Dance,” I list some reasons why TJ’s is loved by its customers. My main source for that list is Doug Rauch, the retired president of TJ’s, where he worked for 31 years. Here are the top few:

They never use the word “consumer.” They call us “customers,” “persons,” or “individuals.”
They have none of what Doug calls “gimmicks.” No loyalty programs, ads, promotions, or anything else that manipulates customers, raises cognitive overhead or insults anyone’s intelligence. In other words, none of what marketing obsesses about. “Those things are a huge part of retailing today, and have huge hidden costs,” Doug says. (I think the company’s biggest marketing expense is its float in the Rose Parade.)
They never discount anything, or say anything is “on sale.” Those kinds of signals add more cognitive overhead. TJ’s wants customers not just to assume, but to know. A single price takes care of that.
They have less than no interest in industry fashion. TJ’s goes to no retail industry meetings or conferences, belongs to no associations, and avoids all settings where the talk is about gaming customers. That’s not TJ’s style because that’s not its substance.
They believe, along with Cluetrain, that markets are conversations—with customers. Doug told me his main job, as president of the company, was “shopping along with customers.” That’s how he spent most of his time. “We believe in honesty and directness between human beings…We do this by engaging with the whole person, rather than just with the part that ‘consumes….We’ll even open packages with customers to taste and talk about the goods.” As a result, “There’s nothing sold at Trader Joe’s that customers haven’t improved.”

Then there’s what Walmart CEO Lee Scott told me in 2000 (at this event) when I asked him “What happened to K-Mart?” From The Intention Economy:

His answer, in a word, was “Coupons.” As Lee explained it, K-Mart overdid it with coupons, which became too big a hunk of their overhead, while also narrowing their customer base toward coupon-clippers. They had other problems, he said, but that was a big one. By contrast, Wal-Mart minimized that kind of thing, focusing instead on promising “everyday low prices,” which was a line of Sam Walton’s from way back. The overhead for that policy rounded to zero.

Which brings me to trust.

We trust Trader Joe’s and Walmart to be what they are. In simple and fundamental ways, they haven’t changed. The ghosts of Joe Coulombe and Sam Walton still run Trader Joe’s and Walmart. TJ’s is still the “irreverent but affordable” grocery store Joe built for what (in his book) Joe called “the overeducated and underpaid,” based in Los Angeles. Walmart is still Sam’s five-and-dime from Bentonville, Arkansas. (Lee Scott told me that.)

CVS’s equivalent to Joe and Sam was Ralph Hoagland, a good friend of good friends of ours in Santa Barbara. All of us also shared history around Harvard and Cambridge, where Ralph lived when he co-founded CVS, which stood for Consumer Value Store, in 1963. In those days CVS mostly sold health and beauty products, cheaply. I remember Ralph saying the store’s main virtue was just good prices on good products. Hence the name.

CVS can do a much better job of signaling bargain prices by just making them as low as possible, on the model of Trader Joe’s and Walmart.

I think there is also a good Health position for CVS: one that bridges its health & beauty origins and its eminence as the leading pharmacy chain in the U.S. And it could rest on trust.

I’m thinking now about tech. Specifically, FPCs, for First-Person Credentials. Read what Jamie Smith says about them in his Customer Futures newsletter under the headline The most important credentials you’ve never heard of. Also check out—

What I wrote last year about Identity as Root
What DIF is doing
What Ayra is doing
Other stuff you’ll be hearing about first-person credentials (but isn’t published yet) when you come to the next IIW (April 8-10).
What you’ll be learning soon about re-basing everything (meaning every SKU, as well as every person) on a new framework that is far more worthy of trust than any of the separate collections of records, databases, and namespaces that currently divide a digital world that desperately needs unity and interop—especially around health.
And MyTerms, which is the new name for IEEE P7012, the upcoming standard (for which I am the working group chair) that should become official later this year, though nothing prevents anyone from putting its simple approach to work.

MyTerms can be huge and world-changing because it flips around the opt-out consent mechanisms that have been pro forma since industry won the industrial revolution and metastasized in the Digital Age. With MyTerms, the sites and services of the world agree to your terms, not the other way around. With MyTerms, truly trusting relationships can be established between customers and companies. This is why I immodestly call it the most important standard in development today.
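As a purely illustrative sketch of that flipped relationship, the snippet below models a person’s proffered terms and a site’s decision to accept them before any engagement. None of the field names or term identifiers come from IEEE P7012; they are invented here to show the direction of agreement.

```python
from dataclasses import dataclass

# Invented structure, not the P7012 schema: the individual proffers terms,
# and the site either agrees to them or does not engage.

@dataclass
class PersonalTerms:
    term_id: str                # pointer to a published, human-readable term
    no_third_party_sharing: bool
    permitted_purposes: tuple   # purposes the person allows

MY_TERMS = PersonalTerms(
    term_id="example:no-stalking-v1",
    no_third_party_sharing=True,
    permitted_purposes=("fulfill_transaction", "provide_support"),
)

def site_accepts(terms: PersonalTerms, requested_purposes: tuple) -> bool:
    """The site agrees only if everything it wants falls within the person's terms."""
    return all(p in terms.permitted_purposes for p in requested_purposes)

# A site that wants behavioural profiling must decline or adapt under these terms.
print(site_accepts(MY_TERMS, ("fulfill_transaction", "behavioural_profiling")))  # False
```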

So I have five simple recommendations for CVS, all to de-enshittify corporate operations and customer experiences:

Drop the whole loyalty thing. Completely. Cold turkey. Hell, fire the marketing department. Put the savings into employees you incentivize to engage productively (not promotionally) with customers. And publicize the hell out of it. Should be fun.
Confine your research to what your human employees learn directly from their human customers.
Be the best version of what you are: a great pharmacy/convenience store chain that’s still long in health and beauty products.
Simplify the app by eliminating all the promotional shit, and by making it as easy as possible for customers to move prescriptions from one CVS store to another.
Watch what’s happening with first-person credentials and MyTerms. Getting on board with those will make CVS a leader, rather than a follower.

Coupon-clipping addicts may feel some pain at first, but if you market the new direction well, making clear that you have “everyday low prices” rather than annoying and labor-intensive discounts (many of which expire in three days), customers will come to love you.


Oasis Open

Q&A with William Parducci

William Parducci was recently honored as a 2024 OASIS Distinguished Contributor. Since beginning his journey with OASIS in 2000, Bill has been a key player in advancing open standards and open source solutions. In this Q&A, Bill reflects on his experiences, the challenges of fostering voluntary adoption of open standards, and his vision for the future of the industry.

What inspired you to get involved with OASIS and contribute to open source and open standards?
I joined OASIS in 2000. At the time, I was working on some large, open enterprise projects where my team was trying to introduce Linux and Apache as the basis for a solution. Despite the resistance to open source software back then, it was clear to me that openly developed, community-driven technology could dramatically increase adoption by making these solutions more accessible.

I discovered OASIS while working on an automotive industry initiative to standardize XML as a data transmission protocol. Seeing that need overlap with the “open source” philosophy—particularly for security—really resonated with me.

Can you share a defining moment or achievement in your work at OASIS that you’re especially proud of?
The release of XACML 1.0 is something I’m particularly proud of. There was significant involvement from major technology companies, but at the same time, there was a lot of hesitation toward open source and a strong desire to protect intellectual property. Our ability to address those concerns and still create a specification that laid the groundwork for robust authorization policies demonstrated the feasibility of developing meaningful open standards while respecting the IP concerns of all stakeholders.

What challenges have you encountered in advancing open standards, and how have you overcome them?
The most significant challenges I’ve faced involve voluntary adoption. In scenarios where conformance to a specification is mandatory, the process is more straightforward; however, when adoption is voluntary, it requires greater foresight and buy-in from management.

To address this, I’ve demonstrated how open standards can reduce future integration costs, streamline migrations between products, and deliver long-term business value. Over the years, I’ve learned that simply having a standard in place isn’t always sufficient to convince upper management—rather, it’s often seen as an expense that must be justified before it’s fully accepted.

How do you see the role of open standards evolving in the next five to ten years?
I’m optimistic on many fronts. Government agencies and international cooperatives will continue to benefit from open standards because there are countless areas—almost endless, really—where a common lingua franca is generally advantageous.

However, in the “identity” space, I’m slightly less optimistic. For instance, I’m a huge fan of SAML and often advocate for its elegant, consistent approach in corporate environments. But in the “public” space, I see OAuth variants continuing to grow as identity management consolidates around the largest technology providers. While OAuth is “open,” implementations have forked, with each provider customizing it for their own needs.

What advice would you give to new contributors looking to make an impact in the OASIS community?
Focus on something you’re passionate about that goes beyond your immediate work requirements. In my experience, those who contribute in this way tend to come up with the most inspired and innovative solutions—and often spot potential flaws that others miss.

Beyond your work with OASIS, what other projects or innovations in technology are you passionate about?
Although I’m passionate about security-related technologies, my time in the market research industry sparked a keen interest in semantic analysis. That interest has only grown with the rapid advancement of AI. In just a few years, we’ve gone from locally implementing open source tools such as Gensim on our own machines to leveraging 100B+ parameter models through simple API calls. It’s been incredible to witness.

What does receiving this Distinguished Contributor award mean to you personally and professionally?
Professionally, I’m honored to be recognized for my efforts to promote open solutions. On a personal level, it’s gratifying because it acknowledges the time and effort I devoted to the TC, even when my corporate responsibilities lay elsewhere.

Is there anything else you’d like to share with the OASIS community?
I have really enjoyed my time working with the OASIS team as well as those on the TCs. To say that it’s a varied set of characters would be an understatement, and seeing the common desire to create something meaningful emerge as the work begins is what fuels my passion for this work.

The post Q&A with William Parducci appeared first on OASIS Open.


FIDO Alliance

Yuno Rolls Out Mastercard Payment Passkey in Latin America to Combat Fraud and Streamline Checkouts

Global payments orchestrator Yuno is launching the Mastercard Payment Passkey Service across Latin America, enabling merchants in the region to streamline online checkout processes and enhance fraud protection.

Following the launch by Yuno, merchants in Brazil, Argentina, and Chile can replace increasingly vulnerable traditional authentication methods, such as one-time passwords, with Mastercard Payment Passkey Service, which uses device-based biometrics, such as fingerprints and facial recognition already available on smartphones, to authenticate purchases.

Mastercard Payment Passkey Service also leverages tokenisation technology to ensure that sensitive data is never shared with third parties and remains useless to fraudsters in the event of a data breach, making transactions even more secure.

This technology promises to not only boost the security of online transactions, but also to significantly reduce cart abandonment rates by increasing convenience for merchants’ customers.


Goodbye to manual card entry: Mastercard reveals when the new era of one-click online payments begins

Changes are on the way for online shopping and e-commerce. The traditional way of paying for items online by typing in your credit card details (card number and CVV security code) will soon be a thing of the past.

Mastercard and other card payment companies will be introducing a one-click button that will work on any online platform.

One of the reasons why services will be moving to a one-click system is to deter hackers who target merchant sites to steal consumer card information. According to a 2023 study by Juniper Research, merchant losses from online payment fraud will exceed $362 billion globally between 2023 and 2028, with losses of $91 billion in 2028 alone.

The one-click system will protect consumers and their online data.


OwnYourData

Empowering Digital Product Passports: OwnYourData’s Role in Secure and Interoperable Data Ecosystems

We are excited to announce OwnYourData’s involvement in the groundbreaking PACE-DPP project: Promoting Accelerated Circular Economy through Digital Product Passports, spearheaded by Virtual Vehicle. As a key consortium partner, OwnYourData is leading the crucial work focused on Data Intermediaries in the work package 4 DPP Data Ecosystem.

PACE-DPP aims to provide guardrails and solution building blocks for tackling the fundamental technological and regulatory challenges involved in smoothly instantiating DPP-based ecosystems. Industrially relevant applications from supply chains in the electronics and wood/pulp/paper industries provide a solid basis for use-case-driven experimentation with key enabling digital technologies such as Data Spaces and Digital Twins. The essential result will be the provision of lightweight, accessible DPP services that unlock the hidden potential of innovative circular economy business models within the context of the European Green Deal.

OwnYourData plays a critical role in the PACE-DPP project by providing essential expertise in self-sovereign identity (SSI) and data governance for the emerging Digital Product Passport (DPP) ecosystem. We contribute to the semantic annotation of data structures to ensure interoperability within the DPP data ecosystem. Additionally, OwnYourData supplies technical building blocks for decentralized identifiers (DIDs), Verifiable Credentials (VCs), and Verifiable Presentations (VPs), supporting attestations and authentication mechanisms in compliance with OpenID specifications (OID4VC, OID4VCI, OID4VP). By implementing these SSI components, OwnYourData enhances trust and security in data sharing processes, fostering a scalable and privacy-preserving digital infrastructure for product traceability.
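To illustrate the kind of building block described above, the sketch below assembles a W3C-style Verifiable Credential as plain data. The issuer and subject DIDs, credential type, and claims are invented for the example; in practice the credential would be signed with the issuer’s DID keys and exchanged through the OID4VCI/OID4VP flows mentioned.

```python
import json
from datetime import datetime, timezone

# Invented identifiers for illustration only; a real deployment would use the
# project's own DID method, schemas, and signature suites.
ISSUER_DID = "did:example:manufacturer-123"
SUBJECT_DID = "did:example:battery-pack-456"

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "ProductOriginCredential"],
    "issuer": ISSUER_DID,
    "issuanceDate": datetime.now(timezone.utc).isoformat(),
    "credentialSubject": {
        "id": SUBJECT_DID,
        "productCategory": "traction-battery",
        "recycledContentShare": 0.22,
    },
}

# Signing (adding a `proof` section) is omitted: it requires the issuer's key
# material and a signature suite, which the SSI tooling referenced above would
# normally provide before the credential is presented to a verifier.
print(json.dumps(credential, indent=2))
```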

Beyond its technical contributions, OwnYourData actively works to integrate a neutral data intermediary (https://intermediary.at) that facilitates the secure storage and controlled exchange of product-related information. The intermediary serves as a key enabler in supply chain processes, ensuring that product data remains accessible and verifiable while respecting privacy and compliance requirements. In addition, we engage with stakeholders and disseminate human-centric aspects of digital product passports through collaborations with initiatives such as MyData, promoting transparency and user empowerment in data ecosystems.

The OwnYourData team in the PACE-DPP project consists of experts in data governance, self-sovereign identity, and semantic technologies. Dr. Christoph Fabianek, an authority in data exchange and SSI frameworks, leads the team’s contributions to decentralized identity solutions and Verifiable Credentials. Dr. Fajar J. Ekaputra brings expertise in semantic web technologies, ensuring structured and interoperable data representation. Gulsen Guler, MSc, specializes in data literacy and human-centric solutions, supporting accessibility and usability of digital product passport implementations. DI(FH) Markus Heimhilcher provides expertise in system operations, database management, and Kubernetes maintenance, ensuring a scalable and secure infrastructure. Paul Feichtenschlager contributes skills in data modeling, statistics, and software development, strengthening the technical foundation of OwnYourData’s role in the project.

We look forward to contributing our expertise to this transformative project and collaborating with other consortium members to establish a secure, interoperable, and privacy-preserving Digital Product Passport ecosystem. Stay tuned for more updates on our journey with PACE-DPP! For more information about the project, visit DPP-Austria.at.

 

This Lighthouse Project has been made possible by financial contributions from the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), supported by the Austrian Research Promotion Agency (FFG) under grant number 917177, as well as from the German Federal Ministry for Economic Affairs and Climate Action (BMWK), supported by the German Research Promotion Agency (DLR-PT).

Der Beitrag Empowering Digital Product Passports: OwnYourData’s Role in Secure and Interoperable Data Ecosystems erschien zuerst auf www.ownyourdata.eu.

Tuesday, 11. February 2025

Energy Web

Unlocking the Future of Energy with Energy Web Circles

Solving energy challenges through collaboration, innovation, and industry driven solutions

The energy industry is at a turning point. With decentralization on the rise, evolving regulatory demands, and an urgent push for sustainability, industry leaders must innovate together. That’s why Energy Web is excited to launch Energy Web Circles — a new initiative that brings together enterprises, regulators, and technology providers to create software solutions for today’s energy challenges.

A New Model for Collaboration

Energy Web Circles are designed to cut through traditional silos by focusing on key issues faced by the energy sector. Each Circle zeroes in on a specific challenge or opportunity, ranging from decentralized identity management and zero carbon electric vehicle charging to carbon-aware computing and advanced grid management. These groups offer structured, flexible environments where participants can create solutions, share insights, and help shape emerging industry standards.

Introducing the First Circle: Universal Energy ID

Our inaugural Circle, Universal Energy ID, sets the stage for a more secure and interoperable energy ecosystem. As energy markets become increasingly decentralized, the need for a transparent and standardized identity framework grows. The Universal Energy ID isn’t just an identity solution — it’s a trust layer for the decentralized energy economy. It enables secure, seamless, and interoperable transactions that are essential for modern energy systems.

What is the Universal Energy ID Circle?

The Universal Energy ID Circle is a collaborative initiative aimed at developing and deploying a decentralized identity and credentialing system for the energy sector. The primary goal is to create a trusted and verifiable identity infrastructure that enables seamless transactions, regulatory compliance, and secure access control across the energy ecosystem. The Circle is designed to foster the development and integration of the EnergyID framework to enable Digital Product Passports (DPPs), e.g. to manage the lifecycle of batteries. For a DPP, this means that information about a product can be stored and updated across a network of various participants (manufacturers, suppliers, distributors, consumers, regulators) without relying on a central entity, making it resistant to censorship and single points of failure. By implementing DPPs, enterprises can drive innovation in electric mobility, enable circular economy practices, and comply with evolving regulatory requirements.

What Does the Solution Look Like?

At the core of this initiative is the EnergyID DID method, a W3C-compliant Decentralized Identifier (DID) specifically tailored for energy applications. This method provides a secure, tamper-proof way to authenticate energy-related assets, organizations, and individuals. This framework empowers electric mobility providers, distributed energy resource (DER) operators, and digital product passport managers to build trust and security into their transactions, setting a new standard for the industry.
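For readers new to DIDs, here is a minimal, assumed example of what a DID document for a distributed energy asset could look like. The `did:energyid:` method name, identifier, and key entry are placeholders; the actual EnergyID method specification defines the authoritative format.

```python
# Placeholder DID document for a battery storage unit. The method name and
# identifier are invented; only the overall W3C DID document shape is standard.
asset_did = "did:energyid:0xA1B2C3D4E5"

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": asset_did,
    "verificationMethod": [
        {
            "id": f"{asset_did}#key-1",
            "type": "JsonWebKey2020",
            "controller": asset_did,
            # The public key would be published here so that grid operators can
            # verify signatures made by (or on behalf of) the asset.
        }
    ],
    "authentication": [f"{asset_did}#key-1"],
}

# A grid operator that resolves this DID obtains the document above and can
# then check credentials (for example, market-participation approvals) that
# were signed against key-1.
```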

The solution comprises:

Universal EnergyID Wallet — A digital identity wallet that allows energy stakeholders (such as utilities, DER owners, and EV users) to store and manage their decentralized identifiers and verifiable credentials.
Verifiable Credentials (VCs) — These digital attestations enable secure proof of identity, asset ownership, and regulatory compliance in energy transactions.
Interoperability Layer — Designed to integrate with existing identity management systems and ensure compatibility with global standards such as eIDAS 2.0.
Trust and Governance Framework — A decentralized governance structure ensuring that credential issuance and verification are secure, reliable, and universally accepted.

How It Works in Practice

For Electric Vehicle Charging: EV owners can use their Universal EnergyID to authenticate at any charging station, ensuring trusted, frictionless access without relying on siloed authentication systems.
For Distributed Energy Resources (DERs): A solar panel or battery storage unit can have a unique, verifiable identity that allows grid operators to authenticate its participation in energy markets.
For Energy Data Sovereignty: Consumers can control who has access to their energy data, ensuring privacy and security while enabling seamless energy transactions.
For Regulatory Compliance: Businesses and energy providers can automatically prove adherence to compliance standards through digitally signed verifiable credentials.

Key Benefits of Universal Energy ID

Unified Identity Management: It provides a single standard for managing identities across energy, electric mobility, and beyond, eliminating fragmentation.
Proven Adoption: Major enterprises and OEMs are already adopting this framework, demonstrating real-world traction.
Regulatory Alignment: Built to align with standards like eIDAS and Digital Product Passports, it ensures both compliance and scalability.
Open and Interoperable: The solution is integrated with leading open-source identity frameworks, including OWF and ACA-Py, making it adaptable to various use cases.
Secure Communication: Worker nodes in Energy Web act as DIDComm mediators and relayers, ensuring secure, reliable communication in decentralized identity ecosystems.
Automated Billing: The system supports automated and trustworthy billing for energy consumed and produced, reducing administrative friction.

Join the Movement
Energy Web Circles are open to all businesses interested in advancing digitization and decarbonization in the energy sector. Whether you’re an existing Energy Web member or a new partner, your expertise is welcome. By joining a Circle, you will:

Shape Emerging Standards: Play a direct role in developing the technologies and protocols that will define the future of energy.
Access Exclusive Insights: Gain early access to research, reports, and discussions that keep you at the cutting edge.
Collaborate Globally: Work alongside leaders in energy, sustainability, and technology to drive innovation and change.

Call to Action: Join the Universal Energy ID Circle or create your own Circle

If digital identity, verifiable credentials, and seamless interoperability in energy can create value for your business, the Universal Energy ID Circle is for you. Whether you’re an enterprise ready to integrate decentralized identity solutions or a regulator committed to enhancing compliance mechanisms, now is the time to get involved. Or do you have a compelling use case or a challenge that could benefit from industry-wide collaboration?

The future of energy is collaborative, decentralized, and digital. With Energy Web Circles, you have the opportunity to drive that future — today. To learn more about our initiative and discover how you can participate, visit Energy Web or reach out to katy.lohmann@energyweb.org

Join the Universal Energy ID Circle today by contacting: commercial@energyweb.org

Unlocking the Future of Energy with Energy Web Circles was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 10. February 2025

Digital ID for Canadians

Advancing and Evolving Digital Trust in the Finance and Regulatory Sectors

February 11, 2025

Current Landscape

The finance and regulatory sectors are undergoing rapid digital transformation. While the industry pioneers new technology and moves away from conventional platforms, it faces rising fraud, privacy breaches, and growing consumer skepticism fueled by misinformation, disinformation, and challenges verifying information in an AI-driven world.

When Finance Canada’s Electronic Task Force for Payments System Review created DIACC in 2012, its goal was to unite public and private sector partners to develop a secure digital ecosystem. This goal remains, but the need to support a future-focused ecosystem underpinned by verification and authentication is more urgent than ever.

By prioritizing digital trust, Canada can secure its financial systems and enhance competitiveness in the global economy. Interoperable frameworks like the DIACC Pan-Canadian Trust Framework™ (PCTF) ensure systems remain resilient, adaptable, and trusted.

Advancing and Evolving Digital Trust in the Finance and Regulatory Sectors

Strengthening Economic Competitiveness and Growth

Preventing fraud and building consumer trust are economic imperatives. Fraud costs taxpayers and businesses millions annually, and to ensure Canada remains competitive, we must adopt robust digital trust solutions to reduce fraud’s financial and operational impact. Adopting the right measures will also help Canada attract international investment, foster innovation in financial services, drive growth, and create jobs.

Enhancing Trust Through the DIACC PCTF to Complement KYC and AML

DIACC encourages its members in finance and payments to adopt the PCTF as a tool to:

Complement Know Your Customer (KYC) and Anti-Money Laundering (AML) regulations by enhancing remote customer onboarding;
Authenticate identities with trusted verification systems to reduce fraud in online transactions, digital wallets, and mobile banking; and
Improve regulatory reporting efficiencies by using verifiable credentials for entity identification.

The PCTF is a critical resource for advancing interoperability, which supports trusted cross-institutional collaboration and strengthens the financial ecosystem.

Fostering Consumer Autonomy and Addressing Misinformation

Public skepticism fueled by misinformation and disinformation creates friction in adopting digital financial platforms. To tackle this, we can do the following:

Respect consumer autonomy: DIACC advocates for voluntary digital solutions that preserve traditional verification methods while offering enhanced, secure options.
Promote digital literacy: Educational campaigns can strengthen consumer confidence by helping Canadians identify trustworthy institutions.
Use “trust signals”: Institutions should adopt trust signals, such as verified logos or PCTF certification badges, to reassure customers of secure practices.

These efforts respect the diverse preferences of Canadians while promoting confidence in digital services.

Proactive Fraud Prevention and Responsible Innovation

Fraud prevention is not just about mitigation—it’s about staying ahead. To reduce incidents such as mortgage fraud and title insurance scams, we must:

Adopt an approach that complements existing client identification methods and includes verifiable credentials, mobile driver licenses and trust registries; financial institutions should integrate DIACC PCTF-aligned solutions to strengthen borrower identification and secure digital proof of ownership.
Leverage responsible AI for risk detection — fraud prevention measures must include responsible AI and real-time analytics to detect anomalies, mitigate risks, and reduce operational costs.

Open Banking and the Future of Finance

The government’s commitment to open banking presents opportunities for financial innovation and introduces risks without secure digital trust frameworks. DIACC’s members are encouraged to:

Use the PCTF to establish interoperable risk management practices, ensuring trusted data sharing across institutions.
Mitigate risks in consumer-permissioned data sharing by implementing robust verification systems that prioritize data privacy and transparency.
Align with global open banking standards to ensure Canadian institutions remain competitive and attract international investment.

Open banking offers an opportunity to empower Canadian consumers and small businesses while strengthening the role of domestic innovation on the global stage.

Best Practices and the Way Forward

Enhance Digital Trust with Emerging Technologies

Canada must adopt emerging technologies to stay globally competitive. Financial institutions can use verifiable data and responsible AI to enhance fraud detection, wallets, credentialing, and compliance.

Collaborate for Global Competitiveness

Collaboration with international regulators and organizations is essential for aligning Canadian frameworks with global norms, enabling secure cross-border transactions and strengthening Canada’s financial ecosystem.

Educate and Empower Through Advocacy and Metrics

DIACC is committed to educating Canadians on digital literacy and trust while addressing inclusivity challenges, including:

Hosting sector-specific workshops and certifications to promote best practices in digital trust.
Amplifying simple, real-world use cases to demonstrate the benefits of digital trust solutions, such as fraud reduction and consumer empowerment.
Advocating for enhanced regulations prioritizing secure transactions and privacy while preserving consumer choice.
Publishing annual metrics on fraud prevention and consumer trust improvements to ensure transparency and accountability.

Conclusion

The finance and regulatory sectors urgently need services with interoperable assurance and trust to meet the growing demands of digital transformation.

By working together, we can:

Educate citizens to strengthen digital literacy while addressing inclusivity challenges.
Amplify the role of governments and domestic and world banks in strengthening trust.
Determine how governments and organizations can best leverage the DIACC Pan-Canadian Trust Framework to complement KYC and AML regulations, build assurance, and mitigate risks for the benefit of all Canadians.

Through education, collaboration, and the strategic use of trusted solutions, we can solidify Canada’s position as a global leader in the digital economy while safeguarding trust, privacy, and economic opportunity for all Canadians.

Download available here.

DIACC-Position-Digital-Trust-For-Finance-and-Regulatory-Sectors-ENG_v1.1

DIF Blog

DIF Workshop Highlights Progress on Privacy-Preserving Age Verification Standards

DIF recently hosted a special session of its Credential Schemas workshop focused on developing privacy-preserving solutions for age verification. Led by Otto Mora, Standards Architect at Privado ID, and Valerio Camiani, Software Engineer at Crossmint, the session explored the growing need for standardized age verification.

The workshop addressed the increasing regulatory requirements for age verification across the US, EU, and other regions. Rather than focusing solely on traditional document-based verification methods, the group discussed innovative approaches including AI-based age estimation and voice recognition technologies, while maintaining strong privacy protections.

A key focus of the discussion was the development of standardized proof of age credential schemas that would enable efficient interoperability between different systems and organizations. Schemas would need to handle a variety of elements like age ranges, verification methods used, and confidence levels aligned with ISO standards. 
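As an illustration of the kind of schema under discussion, the sketch below shows one possible shape for a proof-of-age credential subject: an age-over assertion, the verification method used, and a confidence level. The field names are invented for this example and are not the working group’s published schema.

```python
# Illustrative proof-of-age payload; field names are invented, not the
# Credential Schemas working group's actual output.
proof_of_age_subject = {
    "ageOver": 18,                     # an age-range assertion, not a birthdate
    "verificationMethod": "document",  # e.g. "document", "ai_estimation", "voice"
    "confidenceLevel": "high",         # would map onto the relevant ISO levels
}

def meets_requirement(subject: dict, required_age: int) -> bool:
    """Check an age-over assertion without ever handling a date of birth."""
    return subject.get("ageOver", 0) >= required_age

print(meets_requirement(proof_of_age_subject, 18))  # True
```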

Getting Involved with DIF

For those interested in contributing to this important work:

The working group meets every other week and welcomes new participants
The group’s specifications are publicly available on GitHub, where interested parties can submit commits and issues

To learn more about DIF and its work on decentralized identity standards, visit https://identity.foundation or reach out to membership@identity.foundation.


We Are Open co-op

Understanding Predominant Narratives in AI

Over the past months, WAO has been working on a report called Harnessing AI for Environmental Justice for Friends of the Earth. Funded by Mozilla, the report is a deep dive into the landscape of Artificial Intelligence (AI) and what Big Tech’s newest hype means for activists and campaigners.

Read or download the full report here.

Pas(t)imes in the Computer Lab by Hanna Barakat & Cambridge Diversity Fund

In this series, we pull out salient points from the report, providing a briefer look into social and power dynamics we considered as we tried to give activists insight into how they might utilise AI in their work. The report, and thus many of these excerpts, is cc-by Friends of the Earth.

In Part 1, we’ll look at the predominant narratives around AI that are influencing how the industry, and we, as activists, are thinking about AI. Then, in Part 2, we’ll look at the systemic complexities of our technology use in relation to the climate crisis and other justice driven movements. Part 3 will look into the seven principles we developed as part of the Friends of the Earth project. Finally, Part 4 explores how to be an activist in the world of AI given predominant narratives and system complexities.

Understanding Predominant Narratives in AI
Systemic complexities with AI
Starting principles for the ethical use of AI
How to be an Activist in a World of AI

Ready? Let’s get started!

The polarising narratives around AI

The rapid advancement of Generative AI has led to polarising views of AI in general, creating a binary framing that limits nuanced understanding and which hinders informed decision making. AI is either leading us to a dystopian future where current inequalities are exacerbated to the point of societal collapse, or it is seen as a utopian solution that will fix our problems if we just keep going. (Joseph Rowntree Foundation, 2024).

The techno-solutionism inherent in the utopian vision insists that technology alone will solve the complex social and environmental problems faced by our societies. This obscures reality on the ground. It ignores the agency and power of our social and political solutions. It obfuscates the hidden costs of AI technology, both environmental and social, and the reality of the power dynamics at play in our societies. Market forces and Big Tech companies often control this narrative, attempting to wriggle out of their responsibility to create technology that benefits all of us, rather than a rich few. (Coldicutt, 2024).

On the other hand, the dystopian view of AI ignores the immense power that sophisticated technologies have to change the way our world works. It dismisses our own agency and engagement. After all, it was only thirty-five years ago that the World Wide Web was invented, a project that was built as an open contribution to society as a whole. Despite global issues and challenges, we would argue that the Web has changed the world for the better: social movements benefit from global amplification; activists focused on local issues can receive support or engagement from anywhere; we can build collective power as we find each other through the Web.

The dystopian story insists that integrating AI into our societies will accelerate planetary destruction and lead to further societal divides, as Big Tech abuses communities and the planet in its never-ending quest for more money and power. But technologies can be designed for both good and bad purposes, and have both positive and negative outcomes.

Blanket narratives about the use of AI are not helpful in the current socio-political climate. It is important to understand that there are different kinds of AI, and to form nuanced opinions and policies based on context (QA, 2024). Highlighting the differences between two predominant types of AI is illustrative:

Predictive AI is aiding climate science research and helping to optimise energy efficiency. It is being used to analyse vast amounts of climate data and to create early warning systems for weather events or natural disasters (Climate One, 2024). Predictive AI can also be used to help analyse data on public sentiment, demographics, and online behaviour to make campaigns more effective.

Generative AI is a type of artificial intelligence that can create “original” content, such as text, images, audio, video, or code, in response to user prompts or requests (Stryker & Scapicchio, 2024). It is based not only on vast tranches of training data, but also requires enormous amounts of computational power. In addition, each request made to such a system requires even more energy to generate an answer. It is, indeed, much more resource intensive.

In our report, we focus on the use of generative AI within environmental justice and digital rights activist communities. There are people in these communities who believe that there can be no “ethical” use of generative AI due to the reality of who holds the power in our current approach to AI. We certainly respect and understand this perspective, acknowledging that resistance and refusal are important pillars in activism, and wish only to augment that perspective with one that can direct our collective agency and power towards change.

In doing so, we aim to create a new story for AI — somewhere in between dystopia and utopia, that empowers us all to live our values while utilising AI for good.

These excerpts have been edited in the final report and everything is CC-BY Friends of the Earth and We Are Open Co-op. Read or download the full report here.

Technology has changed so much about our world. Now, AI is changing things again and at hyperspeed. We work to understand its potential implications for our communities, learners and programmes. Do you need help with AI Literacies, strategy, or storytelling? Get in touch!

References:

Joseph Rowntree Foundation (2024) AI and the power of narratives. Available at: https://www.jrf.org.uk/ai-for-public-good/ai-and-the-power-of-narratives (Accessed: 24 October 2024).

Coldicutt, R. (2024). Let’s make AI work for 8 billion people not 8 billionaires. Scottish AI Summit. Available at: https://www.scottishaisummit.com/rachel-coldicutt (Accessed 9 December 2024).

QA (2024). Understanding the different types of AI. Available at: https://www.qa.com/resources/blog/types-of-ai-explained/ (Accessed: 22 November 2024).

Climate One (2024) ‘Artificial Intelligence, Real Climate Impacts’ [podcast], 19th April. Available at: https://www.climateone.org/audio/artificial-intelligence-real-climate-impacts (Accessed: 24 October 2024).

Stryker, C. & Scapicchio, M. (2024). What is generative AI? IBM Explainers. Available at: https://www.ibm.com/topics/generative-ai (Accessed: 22 November 2024).

Understanding Predominant Narratives in AI was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 09. February 2025

Project VRM

Blocking Tracking ≠ Blocking Ads


I started reading BoingBoing when it was a ’zine back in the last millennium. I stopped when I began hitting this:

In fact I don’t block ads. I block tracking, specifically with Privacy Badger, from the EFF.

But BoingBoing, like countless other websites, confuses tracking protection with ad blocking. This is because they are in the surveillance-aimed advertising business, aka adtech.

It’s essential to know that adtech is descended from the junk mail business, euphemistically called “direct response marketing.” As I put it in Separating Advertising’s Wheat and Chaff,

Remember the movie “Invasion of the Body Snatchers?” (Or the remake by the same name?) Same thing here. Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.

As surveillance-based publications go, BoingBoing is especially bad. Here is a PageXray of BoingBoing.net:

And here is a PageXray of the same page’s URL, to which  tracking cruft from the email I opened was appended:


Look at that: 461 adserver requests, 426 tracking requests, and 199 other requests, which BoingBoing is glad to provide. (Pro tip: always strip tracking cruft from URLs that feature a “?” plus lots of alphanumeric jive after the final / of the URL itself. Take out the “?” and everything after it. )
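If you want to automate that pro tip, a few lines of code will do it. This is only a minimal sketch (the example URL and its utm_ parameters are made up), showing one way a script might drop the “?” and everything after it:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_tracking_cruft(url: str) -> str:
    """Drop the query string (the '?' and everything after it) and any fragment."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

# Hypothetical example URL; the utm_* parameters stand in for real tracking cruft.
print(strip_tracking_cruft(
    "https://example.com/some-post/?utm_source=newsletter&utm_campaign=abc123"
))
# -> https://example.com/some-post/
```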

Here is a close-up of one small part of that vast spread of routes down which data about you flows:

Some sites, such as FlightAware, interrupt your experience with a notice that kindly features an X in a corner, so you can make it go away:

Which I do.

But BoingBoing doesn’t. Its policy is “Subscribe or pay with lost privacy.”  So I go away.

Other sites use cookie notices that give you options such as these from a Disney company (I forget which):

Nice that you can Reject All. Which I do.

This one from imgur lets you “manage” your “options.” Those settings, if they are kept anywhere (you can’t tell), live somewhere you can’t reach, so you can’t check what your choices were or whether your privacy has since been violated:

This one at Claude defaults to no tracking for marketing purposes (analytics and marketing switches are set to Off):

TED here also lets you Accept All or Reject All:

I’ve noticed that Reject All tends to be a much more prominent option lately. This makes me think a lot of these sites should be ready for IEEE P7012, nicknamed MyTerms, which we expect to become a working standard sometime this year. (I chair the working group.) I believe MyTerms is the most important standard in development today because it gets rid of this shit—at least for sites that respect the Reject All signal, plus the millions (perhaps billions?) of sites that don’t participate in the surveillance economy.

With MyTerms, sites and services agree to your terms—not the other way around. And it’s a contract. Also, both sides record the agreement, so either can audit compliance later.

Your agent (typically your browser, through an extension or a header) will choose to proffer one of a small list of contractual agreements maintained by a disinterested nonprofit. Customer Commons was created for this purpose (as a spin-off of ProjectVRM). It will be for your terms what Creative Commons is for your copyright licenses.
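For illustration only, here is a rough sketch of what that proffer might look like in practice. The header name and the term identifier below are hypothetical placeholders, not anything defined in a published IEEE P7012 draft or by Customer Commons:

```python
# Purely illustrative: "MyTerms" as a header name and "CuCo-NoStalking-v1" as a
# term identifier are invented for this sketch, not taken from any specification.
import requests

CHOSEN_TERM = "CuCo-NoStalking-v1"  # imagined Customer Commons term identifier

response = requests.get(
    "https://example.com/article",
    headers={"MyTerms": CHOSEN_TERM},  # the agent proffers the visitor's terms
)

# A cooperating site might acknowledge the agreement in its response so that
# both sides can log it and audit compliance later.
if response.headers.get("MyTerms-Agreed") == CHOSEN_TERM:
    print("Site accepted my terms; recording the agreement locally.")
```

The point is the direction of the handshake: the visitor’s agent states the terms, and a cooperating site acknowledges them so both sides can keep a record.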

Customer Commons also welcomes help standing up the system—and, of course, getting it funded. If you’re interested in working on either or both, talk to me. I’m first name at last name dot com. Thanks!

 

Friday, 07. February 2025

Velocity Network

Dr. Meagan Treadway joins Velocity’s board

We’re delighted that Dr. Meagan Treadway has been voted onto the Velocity Network Foundation Board of Directors. The post Dr. Meagan Treadway joins Velocity’s board appeared first on Velocity.

Wednesday, 05. February 2025

DIF Blog

🚀 Celebrating Innovation: Winners of the DIF Hackathon 2024


The DIF Hackathon 2024 brought together builders from around the world to tackle some of the biggest challenges in decentralized identity. Across multiple tracks—including education and workforce solutions, reusable identity, and privacy-preserving authentication—participants developed creative applications that redefine how digital identity is used and trusted in the real world.

One of the standout challenges? The ZKP in SSI track, sponsored by Privacy & Scaling Explorations (PSE) from the Ethereum Foundation. Teams explored how to innovate with Zero-Knowledge Proofs (ZKPs), Multi-Party Computation (MPC), and Fully Homomorphic Encryption (FHE)—leveraging programmable cryptography to enhance privacy, security, and interoperability in SSI systems.

Beyond ZKPs, this year’s hackathon saw verifiable credentials powering next-gen job boards, seamless hotel check-ins, streamlined digital identity solutions for expats, and groundbreaking innovations in decentralized file storage.

Check out the inspiring discussion featuring the hackathon winners which took place on Dec 19th on Spaces: https://x.com/DecentralizedID/status/1869819527170797972

Let’s dive into the full list of hackathon winners and the impactful projects that emerged from DIF Hackathon 2024. 👇

Future of Education and the Workforce Track

Sponsored by Jobs for the Future (JFF) and the Digital Credentials Consortium (DCC)

1st Place: Challenge 1 

VCV 

VCV revolutionizes the CV. It allows you to create verifiable, clean CVs that an employer can upload and instantly verify, viewing the individual verifiable credentials that combine to make up the CV.


2nd Place: Challenge 1 

VeriDID Futures Credential Job Board

A job board that matches job seekers and potential employers using verifiable learning and employment record credentials.

https://devpost.com/software/veridid-futures

1st Place: Challenge 2b

Crediview Job Board

Instantly view and verify credentials with our sleek browser extension. Drag, drop, or paste – our smart detector does the rest. Streamline workflow and boost confidence in credential verification.

ZKP in SSI Track

Sponsored by the Ethereum Foundation Privacy Scaling Explorations (PSE)

1st Place

Decentralised Credentials Issuer

Decentralising credentials issuance with multiparty-computation-based threshold signatures


2nd Place

VC Notary

Converting web service account data into verifiable credentials using TLSNotary

https://difhackathon2024.devpost.com/submissions/570986-vc-notary

3rd Place

ZK Firma Digital 

The project aims to create a zero-knowledge proof infrastructure solution for enhancing Costa Rica’s digital identity system. It’s a protocol that allows you to prove your identity in a privacy-preserving way.

https://difhackathon2024.devpost.com/submissions/559470-zk-firma-digital

Reusable Identity Track

Pinata

Pinata Best Overall/ Pinata Proof of Personhood Credentials Winner

Vouch This Art

Vouch This Art is a Chrome extension that allows users to 'vouch' for images on the web by liking them or leaving messages, enabling interaction with images hosted anywhere online.


Pinata Verifiable File Storage Winner

PinaVault

Access private IPFS files securely with PinaVault! Leverage Pinata’s FilesAPI and W3C credentials to manage, share, and access files within organizations. Credential-based access ensures user privacy.

https://difhackathon2024.devpost.com/submissions/574219-pinavault/

Pinata Identity-Based Access Controls For Private Files Winner

Expatriate

Streamlining the complex process of settling as an expat in Amsterdam through DIF verifiable credentials and wallets.

https://difhackathon2024.devpost.com/submissions/575933-expatriate

Pinata Honorable Mentions 

ChainVid

ChainVid is a decentralized platform to store and share videos online.

https://difhackathon2024.devpost.com/submissions/570136-chainvid

LookMate – Your virtual fashion companion.

Try Before You Buy: Revolutionizing Online Shopping with Realistic Virtual Fitting Rooms.

https://difhackathon2024.devpost.com/submissions/577364-lookmate-your-virtual-fashion-companion

PatentBots.AI - Securing Inventor Rights w/ Pinata Identity

PatentBots.AI is a decentralized system designed to help SMEs fully capitalize on, defend, commercialize, and monetize their patents using the power of memes and credentialed persona identities (AIs).

https://difhackathon2024.devpost.com/submissions/575293-patentbots-ai-securing-inventor-rights-w-pinata-identity

Truvity

1st Place: Challenge 1 & Challenge 2

Miko's Journey

A comprehensive digital identity solution that transforms the complex expat documentation process into a streamlined, secure journey.

Devpost

2nd Place: Challenge 1 

Expatriate

Streamlining the complex process of settling as an expat in Amsterdam through DIF verifiable credentials and wallets.

https://difhackathon2024.devpost.com/submissions/575933-expatriate

3rd Place: Challenge 1 

CredEase

Digital Identity Wallet with a guided to-do list that helps expats collect, link, and submit VCs for tasks like employment, visa application, municipal registration, bank account opening, and housing.

https://difhackathon2024.devpost.com/submissions/577563-credease/judging

ArcBlock

1st Place

BM9000

A blockchain-powered beat maker by Bass Money Productions. Log in, get instant sounds, make patterns, build songs, and upload your own sounds—securely stored in your DID space.


2nd Place

didmail

Decentralized encrypted mail based on ArcBlock DID/NFT technology.

https://devpost.com/software/didmail

3rd Place

Todai

Enables seamless registration and authentication without usernames, passwords, or third-party oversight, using blockchain and advanced digital identity solutions.

https://devpost.com/software/todai

ArcBlock Honorable Mentions

Titan Care

TitanCare leverages multi-agent AI to enhance data security, privacy, and user control, addressing real-world health sector challenges with innovative solutions.

https://devpost.com/software/titancare

FlexiLeave

This solution uses ArcBlock's Blocklet SDK to manage employee portable leave with DIDs and VCs via DID Wallet, enabling identity creation, credential issuance, verification, and file integration with Pinata.

https://difhackathon2024.devpost.com/submissions/577693-flexileave

TBD (Block)

1st Place

DIF TBD KCC

Leverages Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) to create a secure and privacy-preserving solution for Known Customer Credentials (KCC).


2nd Place

KCC TBD Hackathon

https://difhackathon2024.devpost.com/submissions/577640-kcc-tbd-hackathon


3rd Place

kcc-tbdex

A simple backend application written in JavaScript that records a VC JWT in Alice’s DWN and gets the record ID.

https://difhackathon2024.devpost.com/submissions/567335-kcc-tbdex

Ontology 

1st Place

CrediLink connect

CrediLink Connect, a browser extension built with indy-bex-connector, uses Verifiable Credentials and DIDs to combat LinkedIn scams by enabling wallet creation, VC management, and proof verification.


2nd Place

Enhanced Privacy VC Wallet: Implementing Issuer Hiding

A VC Wallet app enabling users to present VPs with enhanced privacy through Issuer Hiding, concealing the VC issuer's identity.

https://difhackathon2024.devpost.com/submissions/577411-enhanced-privacy-vc-wallet-implementing-issuer-hiding

3rd Place

Blockchain Whatsup Application

Seamlessly secure, blockchain-based chat for the decentralized future.

https://difhackathon2024.devpost.com/submissions/577550-blockchain-whatsup-application

Crossmint

1st Place

Idenfy

IDENFY is a reusable identity solution leveraging biometrics and verifiable digital credentials to streamline secure eKYC across sectors with simplicity and speed.

Anonyome Labs

1st Place

Decentralized PHC Credentials in the Future AI Era.

Providing a way for a person to prove their human identity in the future using PHC personhood credentials with VC, DID, and ZKP, supported by powerful infrastructure.


2nd Place

VAI

VAI is an AI chatbot that brings transparency and accountability to large language model interactions by issuing VCs that show the provenance of the inputs, outputs, and the specific model being used.

https://difhackathon2024.devpost.com/submissions/575691-vai

3rd Place

VerifiEd

VerifiEd uses self-sovereign identity (SSI) to provide an interactive, module-based learning platform where users earn Verifiable Credentials (VCs) while mastering SSI principles.

https://difhackathon2024.devpost.com/submissions/562649-verified

Cheqd

1st Place

VAI
VAI is an AI chatbot that brings transparency and accountability to large language model interactions by issuing VCs that show the provenance of the inputs, outputs, and the specific model being used.

Extrimian

1st Place

The Grand Aviary Hotel

The Grand Aviary Hotel offers a seamless stay with digital room access via Verifiable Credentials. Guests unlock rooms with a secure mobile credential, enhancing convenience and security.

https://devpost.com/software/the-grand-aviary-hotel

NetSys - Hospitality and Travel

1st Place 

Journease

Elevate travel with a leading digital travel companion that unlocks the full potential of portable identity and data, pulling diverse capabilities together to deliver end-to-end seamless travel experiences.


2nd Place

Personalised Dining Offers Using AI & Travel Preferences

Imagine booking a hotel with an on-site restaurant and receiving a personalised dining offer based on what you eat. Whether you want a burger or a vegetarian option, you’ve got it! Better for consumers, better for venues.

https://difhackathon2024.devpost.com/submissions/577306-personalised-dining-offers-using-ai-travel-preferences

3rd Place

OwnID

OwnID is an offline-first app leveraging decentralized identity (DID) and Bluetooth to enable secure, private, and internet-free digital identity management and communication.

https://difhackathon2024.devpost.com/submissions/574224-ownid


Oasis Open

Coalition for Secure AI and Universal Business Language V2.4 Open Standard Win OASIS Open Cup Awards


David Lemire, William Parducci, and Omar Santos Honored as OASIS Distinguished Contributors

Boston, MA, USA, 5 February 2025 — OASIS Open, the international open source and standards consortium, announced the winners of the 2024 Open Cup, which recognizes exceptional advancements within the OASIS technical community. The Outstanding New Initiative award was presented to the Coalition for Secure AI (CoSAI), an open source ecosystem dedicated to sharing best practices for secure AI deployment and collaborating on AI security research and product development. The Outstanding Approved Standard award was presented to Universal Business Language Version 2.4, a widely used open standard for global business transactions offering enhanced interoperability and efficiency in supply chain and digital trade. Also announced were the 2024 OASIS Distinguished Contributors: David Lemire, William Parducci, and Omar Santos, recognized for their significant impact on the open source and open standards communities. 

Open Cup Recipients

CoSAI, an OASIS Open Project, won the 2024 Outstanding New Initiative for its pioneering efforts in fostering a collaborative ecosystem dedicated to secure AI. Launched at the Aspen Security Forum in July 2024, CoSAI unites diverse stakeholders across companies, academia, and other fields to develop and share holistic approaches, including best practices, tools and methodologies for secure AI development and deployment. CoSAI’s Technical Steering Committee (TSC) oversees three Workstreams focused on AI security best practices, governance, and frameworks: software supply chain security for AI systems, preparing defenders for a changing cybersecurity landscape, and AI risk governance. Recently, CoSAI’s Project Governing Board established a fourth Workstream called Secure Design Patterns for Agentic Systems. The Outstanding New Initiative category included finalists OASIS Open Supplychain Information Modeling (OSIM) TC and the Space Automated Threat Intelligence Sharing (SATIS) TC.

UBL V2.4 received the Outstanding Approved Standard award for its role in transforming global business transactions. As the leading interchange format for business documents, UBL works seamlessly with frameworks like ISO/IEC 15000 (ebXML), extending the benefits of Electronic Data Interchange (EDI) systems to businesses worldwide. Version 2.4 maintains backward compatibility with earlier versions while introducing new business document types, now totaling 93. The European Union has recognized UBL’s significance by declaring it officially eligible for referencing in tenders from public administrations. UBL V2.4 was chosen as the winner in the Outstanding Approved Standard category that included finalist DocBook Schema Version 5.2.

Distinguished Contributors

Each year, OASIS names Distinguished Contributors to honor members who consistently exceed expectations, demonstrating exceptional dedication, commitment, and a collaborative spirit that enriches the OASIS community. This year’s recipients, David Lemire, William Parducci, and Omar Santos, exemplify the values of innovation and excellence in advancing our open standards and open source projects.

David Lemire has worked as a cybersecurity systems engineer since 1986 across disciplines that include electronic key management, access control, public key infrastructure, Internet of Things, and standards. He has supported the Open Command and Control (OpenC2) language throughout its development by the OASIS OpenC2 TC, serves as the TC Secretary, and is editor for multiple OpenC2 specifications and one committee note. He is also a member of the Collaborative Automated Course of Action Operations (CACAO) for Cyber Security TC and a contributor to several sub-projects under the Open Cybersecurity Alliance (OCA) Open Project.

“I am surprised and honored to be recognized as an OASIS Distinguished Contributor. It has been an interesting and enjoyable journey contributing to the development of OpenC2 and related work at OASIS,” said Lemire. “The strong collaborative community that OASIS fosters has given me the opportunity to engage with many talented contributors and build my knowledge of the world of cybersecurity standards.”

William Parducci has spent over two decades driving innovation in technology, working with talented teams and visionary partners. For more than 10 years, he served as Co-Chair of the OASIS XACML TC, collaborating with experts to advance open standards in security. After re-entering the startup arena, he now focuses on AI-based semantic analysis for data research. His past roles include contributing to Toyota NA’s Enterprise Architecture team and leading the development of ad measurement and semantic analysis tools as CTO/CPO at Ace Metrix. Throughout his career, William has championed the early adoption of open-source solutions and standards, reflecting his commitment to widespread collaboration in the tech community.

“I’ve dedicated much of my career to fostering computational technologies that embrace openness, aiming to make them accessible to all,” said Parducci. “Open standards enable systems to interoperate seamlessly, driving innovation across industries. The synergy between open-source tools like Linux and proprietary systems like Windows exemplifies how openness transforms possibilities into world-changing solutions. I am honored to advocate for the collaborative power of open standards and their role in shaping the future of technology.”

Omar Santos is a Distinguished Engineer at Cisco, renowned for his pioneering work in artificial intelligence security, cybersecurity research, ethical hacking, incident response, and vulnerability disclosure. At OASIS, Omar holds several key leadership roles, including Board Member, Co-Chair of the Coalition for Secure AI (CoSAI) Open Project, Chair of the Common Security Advisory Framework (CSAF) Technical Committee (TC), and Co-Chair of the OpenEoX TC. His leadership extends beyond OASIS, with a co-chair position at the Forum of Incident Response and Security Teams (FIRST) PSIRT Special Interest Group and co-founder and influential leader of the DEF CON Red Team Village, demonstrating his commitment to advancing cybersecurity education and fostering community collaboration. An accomplished author and educator, Omar has published over 20 books, created more than 20 video courses, and contributed over 40 academic research papers. His expertise is further recognized through numerous patents granted in the field of cybersecurity.

Santos said, “I am deeply honored and humbled by this recognition. My heartfelt thanks go out to everyone on the team – it has been a true privilege to collaborate with such knowledgeable and dedicated industry peers. Contributing to this work is incredibly rewarding, and I am committed to helping address the technological challenges of 2025 and beyond. I look forward to continuing to support OASIS and our community as we advance open standards and foster collaboration to tackle some of the most pressing issues in cybersecurity.”

OASIS congratulates this year’s winners and nominees and thanks them for generously sharing their time and expertise to advance our mission.

The post Coalition for Secure AI and Universal Business Language V2.4 Open Standard Win OASIS Open Cup Awards appeared first on OASIS Open.


Ceramic Network

We’re joining Textile to create the Intelligence Layer for AI Agents


We have some big news to share! Our parent company, 3Box Labs, has merged with Textile. With this, Ceramic is joining the Textile family alongside their other decentralized data solutions, Tableland DB and Basin Network.

Since Ceramic’s inception, the crypto industry has changed dramatically, and with it, the types of applications developers are building with decentralized and composable data. As AI agents reshape our digital landscape and become the primary producers and consumers of data, they bring needs very familiar to the Ceramic community: decentralized storage for knowledge and memory, open composability for sharing between agents, and streaming for real-time publishing and subscriptions.

Going forward, we have a vision for Ceramic as a foundational component of something much bigger than itself: an open intelligence network where AI agents can autonomously buy and sell intelligence from each other on-demand. Agents can plug in to supercharge their own knowledge and capabilities, delegate tasks to agents who specialize in that skill, or publish and monetize their own expertise — all onchain. For this new network, Ceramic will play a vital role in empowering agent-to-agent communication and knowledge streaming alongside other storage technologies built by Textile.

Ceramic will continue operating with no disruption to current development or customers, but now you’ll have the added benefit of being connected to a whole new network of agents willing and able to pay for your datasets.

Read our announcement on X for more details and be sure to follow @textileio for future updates.

1/10: Big News! Our parent company @3BoxLabs has merged with @textileio. With this, Ceramic is becoming part of the Textile family.

Ceramic will continue operating, but now as part of something much bigger: An Intelligence Network for AI Agents 🤖https://t.co/boCvwjwnIF

— Ceramic (@ceramicnetwork) February 5, 2025

Thank you for being a part of our journey. Much more to come.


Next Level Supply Chain Podcast with GS1

Don’t Let Empty Shelves Be Your Store’s Silent Killer


Customers have endless options at their fingertips. If they can’t find what they need in your store, they’ll simply go elsewhere to shop. But this isn’t just a retailer’s problem, it’s a challenge that impacts the entire supply chain.

In this episode, industry expert Mike Graen joins hosts Reid Jackson and Liz Sertl to break down the critical importance of on-shelf availability. Mike shares why ensuring products are accessible to customers is more essential than ever. He also shares how RFID, AI-driven algorithms, and robotics are transforming inventory accuracy, alongside actionable strategies to keep shelves stocked and customers satisfied. 

 

In this episode, you’ll learn:

The difference between "in-stock" and "on-shelf availability"

How technology is solving inventory challenges and boosting sales

The evolving nature of customer loyalty and how to keep up

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(03:13) The importance of on-shelf product availability (OSA)

(05:21) Why retailers are losing customers

(07:41) Challenges with inventory management

(14:15) The different ways customers shop 

(18:52) Getting serious about measuring OSA 

(22:47) Computer vision and RFID to track OSA

(28:35) GS1 standards in the supply chain

(32:52) Evolving together with technology

(35:13) Mike’s favorite tech

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Mike Graen on LinkedIn

Tuesday, 04. February 2025

GS1

GS1 DataMatrix: The next generation barcode driving Żabka’s retail innovation


The implementation of GS1 DataMatrix barcodes on Nowalijka’s products for the Polish retailer Żabka has transformed inventory management, improved product tracking and enhanced customer safety. With streamlined processes and better data accuracy, Żabka can ensure fresh, high-quality products and innovative, sustainable retail practices.

case_study_gs1-poland-zabka-nowalijka.pdf

Velocity Network

Dr. Deborah Everhart joins Velocity’s board

We’re delighted that Dr. Deborah Everhart has been voted onto the Velocity Network Foundation Board of Directors. The post Dr. Deborah Everhart joins Velocity’s board appeared first on Velocity.

Transforming Healthcare Credentialing: An Update from Velocity Network Foundation

The post Transforming Healthcare Credentialing: An Update from Velocity Network Foundation appeared first on Velocity.

Digital ID for Canadians

Request for Comment & IPR Review: PCTF Law Society Profile and PCTF Glossary


This review period has closed as of March 20, 2025.

Notice of Intent: DIACC is collaborating to develop and publish the Law Society Profile of the Pan-Canadian Trust Framework (PCTF) to set a baseline for the interoperability of identity services and solutions used in the legal sector. During this public review period, DIACC is looking for community feedback to ensure that the conformance criteria are clear and auditable.

Accompanying this review period is the PCTF Glossary that has been updated to reflect all current terms and definitions found across the suite of PCTF documentation.

To learn more about the Pan-Canadian vision and benefits-for-all value proposition please review the Pan-Canadian Trust Framework Overview.

Document Status: These review documents have been developed by members of the DIACC’s Trust Framework Expert Committee (TFEC) who operate under the DIACC controlling policies and consist of representatives from both the private and public sectors. These documents have been approved by the TFEC for public comment.

Summaries:

The PCTF Law Society Profile is the first industry-focused profile of the PCTF and is intended to help regulated lawyers make informed decisions on how best to adopt digital trust services and solutions for things like remote client verification and fraud reduction.

The PCTF Glossary provides definitions and examples for terms that appear across DIACC PCTF documentation, to ensure all stakeholders have a shared and consistent understanding of terms used in the context of the PCTF. As terms and usage can vary across industry, the Glossary is recommended reading for anyone wanting a strong baseline understanding of the PCTF.

Invitation:

All interested parties are invited to comment

Period:

Opens: February 3, 2025 at 23:59 PT | Closes: March 20, 2025 at 23:59 PT

When reviewing the Law Society Profile Conformance Criteria, please consider the following and note that responses to this question are non-binding and serve to improve the PCTF.

Would you consider the Conformance Criteria auditable or not? That is, could you objectively evaluate whether an organization was compliant with those criteria, and what evidence would be used to justify that?

Review Document: PCTF Law Society Profile

Draft Recommendation V1.0 DIACC Comment Submission Spreadsheet

Review Document: PCTF Glossary

Final Recommendation V1.1 DIACC Comment Submission Spreadsheet

Intellectual Property Rights:

Comments must be received within the comment period noted above. All comments are subject to the DIACC contributor agreement; by submitting a comment you agree to be bound by the terms and conditions therein. DIACC Members are also subject to the Intellectual Property Rights Policy. Any notice of an intent not to license under either the Contributor Agreement and/or the Intellectual Property Rights Policy with respect to the review documents or any comments must be made at the Contributor’s and/or Member’s earliest opportunity, and in any event, within the comment period. IPR claims may be sent to review@diacc.ca. Please include “IPR Claim” as the subject.

Process:

All comments are subject to the DIACC contributor agreement. Submit comments using the provided DIACC Comment Submission Spreadsheet. Reference the corresponding line number for each comment submitted. Email completed DIACC Comment Submission Spreadsheet to review@diacc.ca. Questions may be sent to review@diacc.ca.

Value to Canadians:

The DIACC’s mandate is to collaboratively develop and deliver resources to help Canadians to digitally transact with security, privacy, and convenience. The PCTF is one such resource and guides the digital trust and identity verification ecosystem interoperability by putting policy, standards, and technology into practice aligning with defined levels of assurance. The DIACC is a not-for-profit coalition of members from the public and private sector who are making a significant and sustained investment in accelerating Canada’s Identity Ecosystem.

Context:

The purpose of this review is to ensure transparency in the development process and a diversity of truly Pan-Canadian, and international, input. In alignment with our Principles for an Identity Ecosystem, processes to respect and enhance privacy are being prioritized through every step of the PCTF development process.

DIACC expects to modify and improve these Recommendations based upon public comments. Comments made during the review will be considered for incorporation into the next iteration and DIACC will prepare a Disposition of Comments to provide transparency with regard to how each comment was handled.


Blockchain Commons

SSI Orbit Podcast: Christopher Allen Interview


On January 31, Christopher Allen spoke with Mathieu Glaude at the SSI Orbit Podcast on the controversial topic “Has Our SSI Ecosystem Become Morally Bankrupt?”. Following the video link, below, are Christopher’s Musings on the topic, overviewing some of the material from the video.

The legitimacy of the modern self-sovereign identity (SSI) industry is a vital question to ask, because in the last year I’ve begun to wonder whether DIDs and VCs could actually lose. In my opinion, the main threat is that we made compromises in the development of these new self-sovereign technologies that left them undifferentiated from centralized identity. We betrayed our principles.

But that was actually a pretty small portion of a wide-ranging discussion of identity between myself and Mathieu that I hope you’ll listen to. It also covered my history, the history of SSI, the ideals of SSI, how we’re losing them, and where that could lead us. (Spoiler: it’s not a good place.)

“We’ve seen what happens when you have too much control in a single authority … and we’re forgetting those lessons. We have to minimize the risks when [authorities] are wrong or their powers are abused.”

My own history in the cryptography & identity field dates back to meeting the founders of the Xanadu Project, working with RSA, writing SSLRef, coauthoring TLS, and working with elliptic curves. This history all offers ideas for digital identity! For example, Xanadu was one of the inspirations for my ideas of edge identifiers & cryptographic cliques that I recently released.

I also talked about how self-sovereign identity, DIDs, and VCs all matured at Rebooting the Web of Trust. Early on, we worked with experts from the UN and people working with the refugee crisis. We had all the right ingredients! But it’s so easy to lose the ideals behind a technology. Groupware lost its collaborative processes. TLS became centralized. We had ten principles for self-sovereign identity, but they’ve been somewhat lost.

“Somehow we disconnect our words from where the real power is, which is anti-coercion and anti-violence.”

I always intended those principles to be a starting point. I was surprised when folks at RWOT weren’t interested in reviewing and revising them. Now, that disinterest has resulted in SSI technology that’s losing against MDLs and other centralized tech. Part of the problem is that privacy doesn’t seem important to people. I think it’s a vocabulary disconnect. Privacy allows us to avoid violence and coercion. Maybe those are the words we need to concentrate on!

Unfortunately, we’ve seen what happens if we walk this path. Jacobus Lentz’s enthusiastic identity work leading up to WWII led to genocide in the Netherlands. That’s what happens when identity is centralized.

We need to talk more about all of these aspects of identity, so that the digital identity of the 21st century protects our basic human rights. I hope you’ll join me in discussion here or in the YouTube comments.

“Articulate your values, your own principles, and begin to evaluate your work, your priorities through those lenses. You may find that a very small difference in the choices you make could have big impact because they were informed by your values.”

If you prefer text articles, I’ve linked to a number of my articles on these topics below:

Echoes from History
Echoes from History II
Has our SSI Ecosystem Become Morally Bankrupt?
How My Values Inform Design
The Origins of Self-Sovereign Identity
The Path to Self-Sovereign Identity
Edge Identifiers & Cliques
Open & Fuzzy Cliques

Monday, 03. February 2025

FIDO Alliance

Onboarding the Future: Guide for Edge Deployment with FIDO Device Onboard (FDO)


Why You Should Consider the FDO Standard for Zero-Trust Device Onboarding

1. Executive Summary

IoT and edge computing solutions are exploding as manufacturers are looking for new ways to modernize their operations and accelerate production. By 2025, over 75 billion IoT devices will be connected globally. The industrial IoT market, which spans industries like manufacturing, healthcare, and retail, is valued at USD 194 billion in 2024 and is projected to reach USD 286 billion by 2029. This surge unlocks immense opportunities and innovation for businesses and manufacturers alike. However, keeping up with the pace of demand for these devices and deploying them sustainably has created unprecedented challenges.

Namely, two key factors have the potential to derail the edge revolution entirely:

Costly and inefficient installation processes
Security vulnerabilities

Download the FDO Guide

2. What is Onboarding?

When an edge node or IoT device is installed in a facility, the device must be onboarded to its management platform hosted in the building or in the cloud.

The Onboarding Challenge

Device onboarding at scale is expensive and can introduce significant risks if not done properly. Meanwhile, typical manual onboarding processes and default passwords have created severe vulnerabilities, with 57% of IoT devices vulnerable to medium or high-severity attacks.

Nearly half (48%) of critical infrastructure security leaders reported experiencing at least one major security impact due to a compromised device within the last year.

What is FIDO Device Onboarding (FDO)?

FIDO Device Onboard (FDO) is a revolutionary standard designed to simplify, secure, and automate the onboarding process for IoT and edge devices. FDO simplifies device onboarding in edge and  IoT computing environments with a plug and play, zero trust approach embedded in the specification. Developed by industry leaders like Arm, Amazon, Google, Intel, Microsoft and Qualcomm, the specification is one of the first openly available standards designed specifically to solve edge and  IoT onboarding challenges: time-intensive, complex manual processes, high costs, and security vulnerabilities. It is targeted at industrial, medical, automotive, IT and retail use cases and is complemented by an independent certification program. 

3. The Overlooked Opportunity and Risk

With the introduction of AI, a new layer of complexity was added to the edge challenge. Organizations are now hyper-focused on AI adoption and its promise of smarter, faster, and more efficient operations, but without addressing foundational IoT security, these ambitions are at risk of being undermined.

FIDO Device Onboard (FDO) provides the answer, offering a zero trust, plug and play standard that accelerates deployments while safeguarding infrastructure. In today’s challenging economic climate, automating zero-touch device onboarding enables leaders to deliver ambitious digital transformation projects with limited resources and budgets, saving installation costs, accelerating time-to-value, and improving security. FDO is an open standard that allows users to innovate. FDO’s zero-trust approach is an important piece of the IoT security puzzle and sets the stage for future AI updates inside protected enclaves.

Which industries benefit from FDO?

Industries: Automotive, Healthcare, Chemical, Manufacturing, Consumer goods manufacturing, Oil and Gas, Energy, Retail, Education, Supply chain and logistics, Enterprise and Networking, Telecommunications

What Device Types Can Be Enabled with FDO?

IoT sensors and devices: Temperature sensors, Pressure sensors, Motion detectors, Water quality monitors, Smart thermostats, Connected industrial equipment
Smart cameras: Wi-Fi-enabled cameras, Camera systems
Edge servers: Edge servers, Mobile edge computing (MEC) servers
Networking equipment: Routers, Switches, Gateways, 5G small cells
Industrial PCs and controllers: Industrial PCs, Programmable logic controllers (PLCs), Human-Machine Interfaces (HMIs)

Just as passkeys revolutionized user authentication, FDO is transforming device onboarding in edge computing and IoT environments.

Key Features of FDO

The key features of FDO include:

Late binding: Late binding saves money and time as FDO-enabled devices can be onboarded to any platform without the need for unique customization. This reduces the number of device SKUs needed versus other onboarding solutions. It ensures devices are authenticated and provisioned properly for the device recipient after ownership is verified.

Plug and Play: Whereas manual onboarding requires expensive, skilled technicians, FDO is highly automated, often allowing semi-skilled staff to carry out the installation. This is important in markets such as retail where FDO will allow the store manager to do the installation rather than needing to bring in an expensive IT expert.

Ownership voucher: Device ownership is established and transferred securely in the supply chain with the “ownership voucher,” which uses cryptographic authentication protocols in the FDO specification to verify the device recipient’s physical and digital ownership.

Zero-touch and zero trust: Combined, these attributes establish a zero trust approach that covers end-to-end device onboarding using embedded, cryptographic protocols, and sequential processes to perform initial onboarding actions securely and quickly. The zero trust strategy covers both the device and the management platform during the onboarding process.

FDO for AI and Additional Features

FDO is designed to permit a secure subsystem to onboard independently and securely from the rest of the system. This makes FDO an excellent candidate for updating AI models deployed in edge secure enclaves from a cloud repository.

Additional features include:

Interoperability with OPC Unified Architecture (OPC UA)
Wi-Fi ready
Flexible configurations for cloud, multi-cloud, and closed network environments with multi-tenant and cloud servers
Multiple open source implementation methods available

4. FDO Certified Products

The FIDO Alliance is an open industry association with a mission to reduce the world’s reliance on passwords. Consisting of the biggest global tech organizations and experts in cybersecurity, identity, and authentication, the alliance has a proven track record in transforming consumer authentication with passkeys. 

In the two years since the initial launch, passkeys have been enabled on 20% of the world’s top 100 websites and over 15 billion accounts.

The FIDO Alliance has launched this complementary independent certification program that brings additional value to end users and solution providers alike. It assures that FDO certified solutions meet all the specifications, that devices comply with all security requirements, and have been tested for interoperability with other products. 

FDO Certified products bring considerable additional value to end users by offering:

Guaranteed interoperability and security assurance
Faster deployments and time to value
Greater efficiencies
Assures security and interoperability, eliminating the need for time-consuming vendor bake-offs with uncertified or homebrewed onboarding solutions

Now FIDO is applying this expertise to improve device authentication in industrial IoT and edge computing environments. FDO ensures devices and edge nodes can quickly and securely authenticate and connect online during initial deployment.

5. On the Edge: The Urgency to Secure and Simplify Device Security

Operational bottlenecks are a significant challenge in both industrial and commercial sectors. Manual, unsecured device onboarding not only consumes time and resources but also increases the risk of breaches. According to Microsoft’s recent white paper, How to Scale Intelligent Factory Initiatives Through an Adaptive Cloud Approach, today’s manufacturing leaders are burdened with “technical sprawl and inefficiencies that create major obstacles to being able to scale solutions – including AI – to multiple production lines and geographically dispersed factories.”

This technical sprawl has led to data silos and management complexities, hindering global visibility and scalability. Ultimately, this prohibits the promise of connected devices from being realized in any industry. 

The average cost of a data breach in 2023 was $4.88 million (USD).

Edge implementations involve a lot of risk. Often these edge nodes are used in remote, precarious, and high-risk environments. Industries like healthcare, energy, and manufacturing face unique challenges and regulations, such as vulnerable patient monitoring systems, hazardous environments, and risks to complex supply chains. To make matters more complex, new threats are constantly emerging, such as the rise of quantum computing and zero-day exploits.

Some companies may feel that they can develop their own proprietary onboarding solution, but given today’s economic pressures and the growing threat landscape, businesses often simply cannot afford to develop and maintain proprietary solutions or risk a preventable breach.

FDO and AI: A Symbiotic Future

Edge and IoT are also the “eyes and ears” of AI, collecting and transmitting data for analysis. There is a huge risk in overlooking IoT security and threats such as data poisoning, which can cripple AI models reliant on real-time data. Securing the foundation of edge and IoT is essential to unlock the full potential of AI.

AI systems depend on clean, reliable data streams. A compromised IoT device does not just threaten the device itself – it can corrupt AI models, disrupt decision-making, and open doors to adversarial attacks. FDO’s zero trust onboarding ensures these vulnerabilities are eliminated from the start.

6. What Problems Does FDO Solve?

Human error: 34% of data breaches involve human error – FDO minimizes this with automation and a zero-touch approach.

Time-intensive and inefficient deployments: FDO can deploy 10 times faster than manual methods. It dramatically reduces the time and budget needed to hire skilled technicians in high-risk environments, like oil rigs and factories. In some applications, such as retail, existing on-site staff can install FDO as it is plug and play technology.

Market speed to innovation: Open standards help advance innovation and level the competitive playing field. By standardizing processes, providers can focus on truly adding value to their solutions. For customers, they can benefit from better solutions that are faster to deploy and more secure.

Device Security Risks – The Supply Chain Lifecycle

Stage 1: Manufacturing
Risk: Supply chain compromises (i.e., tampered devices)
FDO: Establishes cryptographic ownership during manufacturing, ensuring device integrity

Stage 2: Shipment and storage
Risk: Device ownership asset mismanagement
FDO: Tracks and secures ownership transfers, maintaining a secure chain of custody

Stage 3: Onboarding and deployment
Risk: Exposures from default passwords and manual installation errors
FDO: Eliminates passwords and human errors with plug and play device onboarding and zero-touch automation

Stage 4: Operations 
Risk: Insecure data transmission, spoofing and infiltration
FDO: Encrypts data exchanges and ensures ongoing device authentication 
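To make the chain-of-custody idea behind those stages concrete, here is a simplified, hypothetical sketch of how an ownership record can be extended at each hop by having the current owner sign the next owner’s public key. The field names and encoding are illustrative only and do not follow the actual CBOR-based FDO ownership voucher format:

```python
# Simplified illustration only: structures and names here are hypothetical and
# do not reproduce the FDO specification's ownership voucher wire format.
from dataclasses import dataclass, field
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw(key: Ed25519PrivateKey) -> bytes:
    """Raw public-key bytes for the given private key."""
    return key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )

@dataclass
class VoucherEntry:
    next_owner_pubkey: bytes  # public key of the party receiving ownership
    signature: bytes          # signature by the *current* owner over that key

@dataclass
class OwnershipVoucher:
    device_id: str
    manufacturer_pubkey: bytes
    entries: list = field(default_factory=list)

def extend(voucher: OwnershipVoucher,
           current_owner: Ed25519PrivateKey,
           next_owner_pubkey: bytes) -> None:
    """The current owner hands the device on by signing the next owner's key."""
    voucher.entries.append(
        VoucherEntry(next_owner_pubkey, current_owner.sign(next_owner_pubkey))
    )

# Manufacturer -> distributor -> customer: each hop extends the voucher, so the
# final recipient holds an unbroken, signed chain of custody back to the maker.
manufacturer, distributor, customer = (Ed25519PrivateKey.generate() for _ in range(3))
voucher = OwnershipVoucher("device-123", raw(manufacturer))
extend(voucher, manufacturer, raw(distributor))
extend(voucher, distributor, raw(customer))
print(f"{len(voucher.entries)} ownership transfers recorded")
```

In FDO, a comparable signed chain is what lets the device recipient prove ownership before any credentials are provisioned during onboarding.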

7. Benefits of FDO for Enterprises and Providers

Standards are vital to unlocking the full potential of any major global technology innovation. Global industry standard initiatives help remove huge amounts of waste, advance technology far more quickly, and increase market competitiveness. Standards also provide long-term security. As threats evolve, experts in the field continue to evolve the standards to keep up.

The FDO standards have been developed and backed by the best companies in the industry, including Microsoft, Dell, and Intel. Experts from these organizations proactively work together to develop use cases and best practices for seamless and secure IoT device authentication, provisioning, and also support the adoption and implementation of the FDO standard.

The FDO standard is also continuously improved within the FIDO Alliance. In the last two years, several Application Notes have been released to deal with implementation and other areas related to FDO 1.1. The newest version of the standard, FDO 1.2, is currently in development with new enterprise-ready features and is expected to be released in 2025.

Benefits for enterprises:

Protect devices and supply chains with zero trust security.
Integration is flexible with existing systems.
Reduce the need to develop and manage your own testing requirements and protocols – buy with confidence with FDO certified products.
Reduce time to market/deployment and increase value.

Benefits for providers:

Leverage FDO certification as a competitive advantage.
Ensure compatibility and earn customer trust with external independent validation. This becomes increasingly valuable as market adoption rises and FDO is increasingly referenced in Requests for Proposals (RFPs).
Realize hardware efficiencies, simplify production, and reduce waste. With FDO, operating systems can be deployed on-site and do not need to be hard programmed in. This capability is now part of an active workstream within the FDO Working Group called “Bare Metal Onboarding”.
Fast-track solution development with confidence. Free engineer time to focus on higher value projects rather than waste time with manual or proprietary onboarding solutions.
Offer a faster, more efficient solution to customers.

“Deploying FDO has marked a pivotal shift for ASRock Industrial, establishing a new benchmark in secure, scalable onboarding for industrial edge IoT solutions. FDO’s advanced security framework enables us to deliver unparalleled reliability and adaptability, empowering our clients to scale confidently in increasingly complex environments. This deployment cements ASRock Industrial’s leadership in industrial computing security and sets the stage for us to shape the future of Industry 4.0 with solutions that are both resilient and future-ready” – Kenny Chang, Vice President, ASRock Industrial

7. How to Adopt FDO Today

FDO offers a simple, secure, and scalable solution for enterprises and providers to accelerate edge computing and IoT device deployment at scale. With proven benefits like streamlined procurement, reduced costs, and enhanced security, FDO offers a clear path to efficiency and innovation – even in complex, high-risk, distributed environments. 

Now is a perfect time to join industry leaders like Microsoft, Dell, Red Hat, and Intel in backing FDO and paving the way for wider adoption.

There are several ways to get involved with FDO through the FIDO Alliance:

Explore: Discover FIDOⓇ Certified FDO products for seamless device onboarding.
Get certified: Learn how to get FDO certified and demonstrate your products meet global security and interoperability standards.
Join the FIDO Alliance: Become a FIDO Alliance member and help shape the future of the FDO standard.


The technology, resources, and support are in place for FDO to transform the way leaders and teams deploy IoT devices at scale while managing edge security risks in today’s fast-paced economy.

References

1. Industrial IoT Market Forecast to 2029, Research and Markets Report. https://www.researchandmarkets.com/report/industrial-iot
2. Palo Alto Unit 42 IoT Report. https://start.paloaltonetworks.com/unit-42-iot-threat-report
3. Verizon 2024 Mobile Security Index. https://www.verizon.com/business/resources/reports/mobile-security-index/
4. Microsoft: How to Scale Intelligent Factory Initiatives Through an Adaptive Cloud Approach. https://clouddamcdnprodep.azureedge.net/gdc/gdcQll0SB/original
5. IBM 2024 Cost of a Data Breach Report. https://www.ibm.com/reports/data-breach
6. Verizon 2024 Data Breach Investigations Report. https://www.verizon.com/business/resources/reports/dbir/

MobileIDWorld: Passkey Adoption Surges 550% in 2024 as Bitwarden Reports 1.1M New Implementations

The adoption of passkeys in digital authentication has shown significant growth throughout 2024, building on the momentum started by major platform providers in their shift away from traditional passwords. While falling short […]

The adoption of passkeys in digital authentication has shown significant growth throughout 2024, building on the momentum started by major platform providers in their shift away from traditional passwords. While falling short of earlier predictions of 15 billion accounts, password management provider Bitwarden reported a 550 percent increase in daily passkey creation in December 2024 compared to the previous year, with approximately 1.1 million passkeys created in Q4 alone.

Industry-wide implementation of passkeys has expanded substantially, with PasskeyIndex.io documenting an increase from 58 to 115 services supporting passkeys during 2024. The growth follows significant developments in the FIDO2 authentication landscape, including new research revealing both strengths and potential vulnerabilities of synced passkeys. Bitwarden’s reach now extends to more than 180 countries and 50 languages, serving over 50,000 business customers globally.


Velocity Network

Velocity Network Foundation – New Year Updates

The post Velocity Network Foundation – New Year Updates appeared first on Velocity.

DIF Blog

Announcing Dr. Carsten Stöcker as DIF Ambassador

Announcing Dr. Carsten Stöcker as DIF Ambassador

DIF is excited to announce the appointment of Dr. Carsten Stöcker as DIF Ambassador. As founder and CEO of Spherity GmbH, Dr. Stöcker brings pioneering experience in implementing decentralized identity across industrial ecosystems. His expertise in developing Verifiable Digital Product Passports for regulated industries and his leadership in bridging European Digital Identity initiatives with Industry 4.0 applications make him uniquely qualified to advance DIF's mission.

A physicist by training with a Ph.D. from the University of Aachen, he has served as a Council Member of Global Future Network for the World Economic Forum and Chairman of IDunion SCE. Through his work at Spherity, he has pioneered secure identity solutions across enterprises, machines, products, and even algorithms, with particular focus on highly regulated technical sectors requiring stringent compliance processes.

Vision and Focus Areas

As DIF Ambassador, Dr. Stöcker will help strengthen DIF's mission of developing secure, interoperable standards for privacy-preserving identity ecosystems, fortifying DIF’s role as a hub for innovation. His work will focus on digital identity convergence, bridging EUDI Wallets with DIF standards and Industry 4.0 to create seamless, secure, and compliant identity solutions. This includes establishing decentralized identity as the backbone of industrial ecosystems, supporting automation, trust, and circular economy initiatives.

A key focus will be driving adoption of Verifiable Digital Product Passports in regulated industries such as pharmaceuticals, batteries, and automotive, enabling robust traceability, compliance, and sustainability verification. Dr. Stöcker will also work on ensuring interoperability between decentralized identity standards and European Digital Identity Wallets for cross-border organizational applications.

Industry Impact and Future Direction

"Decentralized identity is instrumental for building trust in global supply chains, regulatory compliance, and enabling the future of Industry 4.0," says Dr. Stöcker. His leadership will strengthen DIF's mission to make decentralized identity the foundation for a more secure and interoperable digital world.

Join us in welcoming Dr. Stöcker as DIF Ambassador. Subscribe to our blog to stay updated on Dr. Stöcker's work advancing decentralized identity standards for Industry 4.0 and digital product passports. For more information about DIF initiatives and getting involved, visit our website.


We Are Open co-op

A Constructive Approach to AI Literacies

Introducing AILiteracy.fyi We Are Open Co-op (WAO) is a collective of individuals who share a commitment to ethical, inclusive, and sustainable practices in all aspects of our work, including AI literacy. Our founding members are well-versed in this area. I wrote my thesis on the concept of digital literacies, while Laura Hilliger wrote her thesis on web literacy. Along with John Bevan, we worke
Introducing AILiteracy.fyi

We Are Open Co-op (WAO) is a collective of individuals who share a commitment to ethical, inclusive, and sustainable practices in all aspects of our work, including AI literacy.

Our founding members are well-versed in this area. I wrote my thesis on the concept of digital literacies, while Laura Hilliger wrote her thesis on web literacy. Along with John Bevan, we worked on the Mozilla Webmaker programme, and more recently have responded to a UNESCO call for definitions of AI and data literacy, and published a paper on what media and information literacy mean in today’s connected world.

We also have an upcoming paper with Friends of the Earth around Harnessing AI for Environment Justice, due to be published soon.

Screenshot of AILiteracy.fyi

Digital literacies are plural and context-dependent, which means that AI literacies, as a subset of digital literacies, are too. There is no single “digital literacy” or “AI literacy” but rather a series of behaviours, practices, and habits of mind that change over time. These also depend on the context in which we are operating: this can include age, development, and experience; it can also include working environment, sector, and legislative frameworks.

WAO’s approach to AI literacies is informed by constructivism, pragmatism, and systems thinking, ensuring that our efforts are placed within intersecting historical, social, and technological contexts.

The Eight Elements

I wrote my doctoral thesis on digital literacies because I was interested in what the concept of ‘literacy’ means when we replace pen and paper with screens. Over the course of that research, I examined multiple overlapping and competing definitions of ‘digital literacy’, eventually coming up with eight ‘essential elements’ of digital literacies. These elements are a starting point for a discussion which is dependent upon context. For example, the digital literacies needed in a primary school classroom are very different from those needed by security researchers attempting to prevent the spread of misinformation within social networks.

The eight elements are directly applicable and offer ways to engage with AI technologies. This approach ensures that AI literacies are considered from a more holistic point of view, focusing as much on the human aspects as the technological. Critical engagement is important: just taking something off the shelf and attempting to apply it to a particular context is unlikely to meet with success. You have to do the work of contextualisation, which is best done through dialogue.

Applying the 8 elements to ‘AI Literacies’

Cultural — Understanding AI’s impact on society, including cultural norms and media influence
Cognitive — Evaluating AI outputs critically, understanding how AI works, and using AI tools in analytical tasks
Constructive — Learning to create and modify AI tools, and applying AI in creative projects
Communicative — Engaging with AI systems effectively, understanding AI’s role in communication platforms, and its impact on human emotions, relationships, and social well-being
Confident — Building self-assurance in using AI technologies and interfaces
Creative — Exploring innovative uses for AI and thinking ethically about AI design
Critical — Questioning power dynamics, biases, ethical issues, and considering the environmental impact of AI
Civic — Examining AI’s role in governance, policy, social good initiatives, legal awareness, and economic implications

Note that these definitions are slightly different to, for example, Angela Gunder’s Dimensions of AI Literacies which are also based on the eight essential elements approach. This is to be expected!

Our approach to AI literacies is grounded in the belief that AI literacies are part of digital literacies, not a separate field. We aim to demystify AI, helping people recognise that many of the skills they already possess are directly applicable to AI. Rather than actively promoting a single “AI literacy,” we seek to raise awareness of its significance and contribute thoughtfully to the ongoing discussion. Our goal is to share perspectives and resources that enable individuals to engage with AI critically and responsibly.

Next steps

Laura has an upcoming series of posts relating to our work with Friends of the Earth which we will share in multiple parts. You may also be interested in how to cooperate through the use of AI with your team.

If you need a thought partner for this kind of work, we’ve got the credentials, the experience, and the interest to help! Get in touch for a chat.

A Constructive Approach to AI Literacies was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.


DIF Blog

DIF Newsletter #48

February 2025 DIF Website | DIF Mailing Lists | Meeting Recording Archive Table of contents Decentralized Identity Foundation News; 2. Working Group Updates; 3 Special Interest Group Updates; 4 User Group Updates; 5. Announcements; 6. Community Events; 7. DIF Member Spotlights; 8. Get involved! Join DIF 🚀 Decentralized Identity Foundation News DIF&

February 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Special Interest Group Updates; 4. User Group Updates; 5. Announcements; 6. Community Events; 7. DIF Member Spotlights; 8. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF's working groups and forums kicked off January with a flurry of activity, including:

Proof of Age Special Workshop on January 28th, featured in Biometric Update [Read the article]
DIF Labs prepares for its first cohort Show-and-Tell
DIDComm User Group added European and APAC-friendly meeting times

Details for these and more to follow in soon-to-be-released posts, so stay tuned!

🛠️ Working Group Updates DID Methods Working Group

The DID Methods Working Group is making progress towards establishing selection criteria and accepting proposals for DID methods. They are evaluating mechanisms for measuring decentralization of methods, and recently evaluated the potential of self-certifying identifier methods.

DID Methods meets bi-weekly at 9am PT/ noon ET/ 6pm CET Wednesdays

Identifiers and Discovery Working Group

The Identifiers and Discovery Working Group, along with its subgroups focusing on DID Traits and did:webvh, continues to make substantial progress towards specification readiness. They reviewed the DID Web VH specification and implementation, explored the use of DIDs in IoT devices and enterprise communities, and worked on verification processes. The DID Traits team focused on finalizing specifications for their 1.0 release and addressed government-approved cryptography standards.

Identifiers and Discovery meets bi-weekly at 11am PT/ 2pm ET/ 8pm CET Mondays

🪪 Claims & Credentials Working Group

The Credential Schemas Work Item held a special workshop on Proof of Age on January 28th, where the team invited the public to explore standardization of age verification schemas and discussed privacy-preserving solutions. A special report-out will follow.

Credential Schemas work item meets bi-weekly at 10am PT/ 1pm ET/ 7pm CET Tuesdays

Applied Crypto Working Group

The general Applied Crypto Working Group restarted at the end of January to focus on developing a trust model for ZKP self-attestations. After an initial evaluation of Anon Aadhaar, they're ready to work on a general framework that can be applied across different implementations.

The Crypto BBS+ Work Item group maintained steady progress throughout January, with weekly meetings focusing on blind signatures and pseudonyms. The team worked on refining API designs and explored the potential of post-quantum privacy, while also addressing the need for test vectors and updates to working group specifications.

BBS+ work item meets weekly at 11am PT/ 2pm ET/ 8pm CET Mondays
Applied Crypto Working Group meets bi-weekly at 7am PT/ 10am ET/ 4pm CET Thursdays

DIF Labs Working Group

The initial Labs cohort demonstrated substantial progress, with the recent meeting featuring mentor review in preparation for the February show and tell session. More details to follow.

DIF Labs meets on the 3rd Tuesday of each month at 8am PT/ 11am ET/ 5pm CET

DIDComm Working Group

The DIDComm Working Group is considering moving its usual meeting time to accommodate EU participants. They are discussing collaboration with the Trust Spanning Protocol (TSP).

DIDComm Working Group meets the first Monday of each month noon PT/ 3pm ET/ 9pm CET

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.

🌎 DIF Special Interest Group Updates
DIF Hospitality & Travel SIG

The team advanced work on a standardized travel profile schema, focusing on multilingual support and international data handling requirements. A major highlight was the January 30th session featuring presentations from SITA and Indicio, who demonstrated successful implementation of verifiable credentials in travel, including a pilot program in Aruba.

Key developments included:

Progress on JSON schema development for standardized travel profiles
Advancement of multilingual and localization capabilities
Refinement of terminology and glossary for industry standardization
Demo of successful verifiable credentials implementation in live travel environment

Meetings take place weekly on Thursdays at 10am EST. Click here for more details

DIF China SIG

The China SIG is growing into a vibrant community, with over 140 people in the discussion group. In 2024 they organized 9 online meetings and invited different DID experts for discussions, including experts from GLEIF, DIF, and TrustOverIP.

Click here for more details

APAC/ASEAN Discussion Group

The DIF APAC call takes place Monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.

DIF Africa SIG

Meetings take place Monthly on the 3rd Wednesday at 1pm SAST. Click here for more details

DIF Japan SIG

Meetings take place on the last Friday of each month 8am JST. Click here for more details

📖 DIF User Group Updates
DIDComm User Group

The DIDComm User Group established additional meeting times to accommodate global participation. They worked on expanding their reach and planned engagement with Trust Spanning Protocol representatives, while also focusing on improving documentation and accessibility.

There are two meeting series to accommodate different time zones, each taking place every Monday except the first week of the month (which is reserved for DIDComm Working Group). Click here for more details.

Veramo User Group

Meetings take place weekly on Thursdays, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details

📢 Announcements at DIF

Conference season is kicking into high gear. Explore our Events calendar to meet the DIF community at leading Decentralized Identity, Identity, and Decentralized Web events.

🗓️ DIF Member Spotlights

Member Spotlight: Nuggets

Nuggets, a DIF member company, is pioneering innovative solutions at the intersection of AI and digital identity. In a recent interview, CEO Alastair Johnson details how their new Private Personal AI and Verified Identity for AI Agents products are tackling critical privacy and security challenges in AI adoption. Through decentralized identity wallet technology, Nuggets enables users to maintain complete control over their personal data while interacting with AI systems, while also providing verifiable digital identities for AI agents to ensure accountability and trust. Read the full interview.

Member Spotlight: The Camino Network Foundation

The Camino Network Foundation, a Switzerland-based non-profit, is revolutionizing the global travel industry through its specialized Layer 1 blockchain infrastructure. In a recent DIF member spotlight, they discuss their mission to tackle key industry pain points including high distribution costs, inefficient payment processes, and lengthy market entry times. Through their blockchain-based ecosystem and self-sovereign identity solutions, Camino aims to create seamless, secure travel experiences while reducing fraud and protecting user privacy. Read the full interview.

👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube

🔍 Read the DIF blog

New Member Orientations

If you are new to DIF join us for our upcoming new member orientations. Find more information on DIF’s slack or contact us at community@identity.foundation if you need more information.


Oasis Open

Invitation to comment on DMLex Version 1.0 before call for consent as OASIS Standard

OASIS and the OASIS Lexicographic Infrastructure Data Model and API (LEXIDMA) TC [1] are pleased to announce that Data Model for Lexicography (DMLex) Version 1.0 is now available for public review and comment. The LEXIDMA TC’s purpose is to create an open standards based framework for internationally interoperable lexicographic work. The TC will develop a […] The post Invitation to comment on DM

Comment period ends April 1st

OASIS and the OASIS Lexicographic Infrastructure Data Model and API (LEXIDMA) TC [1] are pleased to announce that Data Model for Lexicography (DMLex) Version 1.0 is now available for public review and comment.

The LEXIDMA TC’s purpose is to create an open standards based framework for internationally interoperable lexicographic work. The TC will develop a simple, modular, and easy to adopt data model that will be attractive for all lexicographic industry actors across companies and academia as well as geographic locations. Adoption of that model will facilitate exchange of lexicographic and linguistic corpus data globally and also enable effective exchange with adjacent industries such as language services, terminology management, or technical writing.

The TC received three Statements of Use from University of Galway, Lexical Computing, and Jozef Stefan Institute [3].

The candidate specification and related files are available here:

Data Model for Lexicography (DMLex) Version 1.0

Committee Specification 01

08 November 2024

PDF (Authoritative):

https://docs.oasis-open.org/lexidma/dmlex/v1.0/cs01/dmlex-v1.0-cs01.pdf

HTML:

https://docs.oasis-open.org/lexidma/dmlex/v1.0/cs01/dmlex-v1.0-cs01.html

Schemas:

JSON: https://docs.oasis-open.org/lexidma/dmlex/cs01/schemas/JSON/

RDF: https://docs.oasis-open.org/lexidma/dmlex/cs01/schemas/RDF/

XML: https://docs.oasis-open.org/lexidma/dmlex/cs01/schemas/XML/

Informative Copies of 3rd Party Schemas:

https://docs.oasis-open.org/lexidma/dmlex/cs01/schemas/informativeCopiesof3rdPartySchemas

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at:

https://docs.oasis-open.org/lexidma/dmlex/cs01/dmlex-v1.0-cs01.zip

Members of the LEXIDMA TC [1] approved this specification by Special Majority Vote [2]. The specification had been released for public review as required by the TC Process [4].

Public Review Period

The 60-day public review starts 01 February 2025 at 00:00 UTC and ends 01 April 2025 at 23:59 UTC.

This is an open invitation to comment. OASIS solicits feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

Comments may be submitted to the project by any person through the use of the project’s Comment Facility. Members of the TC should submit feedback directly to the TC’s members-only mailing list. All others should follow the instructions listed here. 

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries the same obligations at least as the obligations of the TC members. In connection with this public review  we call your attention to the OASIS IPR Policy [4] applicable especially [5] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

========== Additional references:

[1] OASIS LEXIDMA TC

https://groups.oasis-open.org/communities/tc-community-home2?CommunityKey=0fd41fbb-72be-4771-8faf-018dc7d3f419

[2] Approval ballot:

https://groups.oasis-open.org/higherlogic/ws/groups/0fd41fbb-72be-4771-8faf-018dc7d3f419/ballots/ballot?id=3867

[3] Links to Statements of Use

University of Galway: https://groups.oasis-open.org/viewdocument/statement-of-use-csd01-university?CommunityKey=0fd41fbb-72be-4771-8faf-018dc7d3f419&tab=librarydocuments&MessageKey=63e64b61-90b2-47c0-ac24-6559c31a895d
Lexical Computing: https://groups.oasis-open.org/discussion/oasis-lexicographic-infrastructure-data-model-and-api-lexidma-tc-statement-of-use-for-dmlex-cs01-by-lexical-computing
Jozef Stefan Institute: https://groups.oasis-open.org/discussion/statement-of-use-for-dmlex-cs01

[4] https://www.oasis-open.org/policies-guidelines/ipr/

[5] https://www.oasis-open.org/committees/lexidma/ipr.php

Intellectual Property Rights (IPR) Policy

The post Invitation to comment on DMLex Version 1.0 before call for consent as OASIS Standard appeared first on OASIS Open.

Thursday, 30. January 2025

Velocity Network

Robert McGough joins Velocity’s board

We're delighted that Robert McGough has been voted onto the Velocity Network Foundation Board of Directors. The post Robert McGough joins Velocity’s board appeared first on Velocity.

Elastos Foundation

Elastos Secures $20M Investment from Rollman Management to Unlock Trillions in Bitcoin Finance

Funding accelerates the development of Elastos’ ELA token, Native Bitcoin DeFi protocol, and Web3 creator economy – positioning Elastos as the utility layer for Bitcoin. Majuro, Marshall Islands – Elastos, a decentralized web infrastructure pioneer, today announced a $20 million strategic investment from Rollman Management to scale its Bitcoin-aligned ecosystem. Rollman Management, recognized for i

Funding accelerates the development of Elastos’ ELA token, Native Bitcoin DeFi protocol, and Web3 creator economy – positioning Elastos as the utility layer for Bitcoin.

Majuro, Marshall Islands – Elastos, a decentralized web infrastructure pioneer, today announced a $20 million strategic investment from Rollman Management to scale its Bitcoin-aligned ecosystem. Rollman Management, recognized for its high-profile investments in blockchain projects like Ripple, Ethereum, Solana, and Planck, now ranks Elastos among its top five holdings. The partnership will fuel the launch of Elastos’ Native Bitcoin DeFi protocol, BeL2, expand its merge-mined ELA token as a Bitcoin reserve asset, and accelerate Elacity—a groundbreaking Web3 data marketplace that enables creators to monetize content without intermediaries.

With Bitcoin’s market cap surpassing $2 trillion, Elastos addresses critical gaps in Bitcoin’s ecosystem: 

 

ELA as Bitcoin’s Merge-Mined Reserve Asset: ELA tokens have been secured by Bitcoin’s hash power through merge-mining since 2018, aligning with Satoshi Nakamoto’s 2010 vision for decentralized networks. With a total supply of 28,220,000 ELA by 2105 and around 50% of Bitcoin’s hashrate, ELA gains security and decentralization, provides additional revenue for BTC miners at no extra cost, and creates a cryptoeconomically sound reserve asset for Elastos’ Bitcoin-native DeFi system.

BeL2: Bitcoin’s DeFi Breakthrough: Launching in Q2 2025, BeL2 allows Bitcoin holders to collateralize BTC in personal wallets and access Ethereum smart contract services. These include minting stablecoins, performing swaps, and borrowing assets peer-to-peer, unlocking BTC’s value while eliminating reliance on synthetic BTC (e.g., WBTC) and centralized custodians. BeL2 combines locking scripts, zero-knowledge proofs, oracles, and an arbiter network in which ELA stakeholders can stake ELA and earn BTC fees as decentralized nodes that support the protocol (a generic sketch of this collateral pattern follows below).

Elacity: Web3’s Creator Revolution: Already proven in early tests, where one creator earned $5,600 in 24 hours through tokenized podcast access, Elacity v2 will launch in April with channels and subscription models. It enables influencers to encrypt, tokenize, and sell content and royalties on Elastos for audio and video markets, with plans to extend its technology to support the tokenization of AI markets.

Elastos (ELA) in a nutshell
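
The BeL2 description above follows a pattern common to BTC-collateralized systems: bitcoin is locked under a script on the Bitcoin side, evidence of that lock is relayed to the EVM side, and a contract there extends credit against the attested collateral. The sketch below shows only that generic pattern in Python, with the zero-knowledge proof replaced by a signed oracle attestation for brevity; it is an illustrative assumption, not the BeL2 protocol, and every class and parameter name is ours.

```python
import hashlib
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey

@dataclass
class LockAttestation:
    """An oracle's signed claim that a BTC amount is locked under a given script hash."""
    script_hash: str
    sats_locked: int
    signature: bytes

class Oracle:
    """Watches the Bitcoin chain (simulated here by a dict) and signs what it observes."""
    def __init__(self) -> None:
        self.key = Ed25519PrivateKey.generate()
        self.locks: dict[str, int] = {}

    def observe_lock(self, script: bytes, sats: int) -> str:
        script_hash = hashlib.sha256(script).hexdigest()
        self.locks[script_hash] = sats
        return script_hash

    def attest(self, script_hash: str) -> LockAttestation:
        sats = self.locks[script_hash]
        return LockAttestation(script_hash, sats, self.key.sign(f"{script_hash}:{sats}".encode()))

class CollateralContract:
    """Stand-in for an EVM-side contract that mints a stable balance against attested BTC."""
    def __init__(self, oracle_pub: Ed25519PublicKey) -> None:
        self.oracle_pub = oracle_pub
        self.balances: dict[str, int] = {}

    def mint_against(self, borrower: str, att: LockAttestation, sats_per_unit: int) -> None:
        # Raises InvalidSignature if the attestation was not produced by the trusted oracle.
        self.oracle_pub.verify(att.signature, f"{att.script_hash}:{att.sats_locked}".encode())
        # Illustrative 50% loan-to-value; a real protocol also needs price feeds and liquidation.
        self.balances[borrower] = (att.sats_locked // sats_per_unit) // 2

oracle = Oracle()
contract = CollateralContract(oracle.key.public_key())

# Borrower locks BTC under a script they control; the oracle observes and attests;
# the contract mints a collateral-backed balance on the EVM side.
script_hash = oracle.observe_lock(b"timelocked-script-for-borrower", sats=2_000_000)
contract.mint_against("0xBorrower", oracle.attest(script_hash), sats_per_unit=1_000)
print(contract.balances)  # {'0xBorrower': 1000}
```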

“Leveraging Bitcoin’s multi-trillion-dollar consensus to empower Web3 users with scalable utilities—that’s where Elastos comes in,” said Rong Chen, Elastos Founder. “Merge-mining ties ELA’s security to Bitcoin’s, and BeL2, Elastos’ decentralized finance protocol, unlocks BTC-backed DeFi without compromises, whilst Elacity creates a decentralized digital goods economy on top. Rollman’s investment supports our role as Bitcoin’s utility layer.”

The $20M investment from Rollman will drive the advancement of Elastos technologies and also help Elastos reorient its branding, mature its technological stance, and go to market. This includes enhancing marketing efforts, which will further position Elastos as a leader in the growing Bitcoin-native DeFi space.

Elastos as a Pioneer in Bitcoin-secured Governance

Beyond its technological advancements, Elastos stands out for its Cyber Republic Consensus (CRC) governance model, formalized as a DAO LLC in the Marshall Islands, which signed this agreement with Rollman. This delegate-based system allows community members to stake Bitcoin merge-mined ELA, earn APY, and annually elect—or run as—one of 12 council members who vote on proposals, drive innovation, sign contracts, and validate Elastos’ Smart (EVM) and Identity (DID) sidechains. This ensures governance decisions reflect the community’s interests and demonstrates Elastos’ commitment to a truly decentralized and transparent ecosystem rooted in Bitcoin.

As Elastos enters its next phase of growth, participants can join the ecosystem’s CRC DAO by acquiring merge-mined ELA, which has a market cap of $48,542,586 and is secured by nearly 50% of Bitcoin’s hashrate (366.01 EH/s, equivalent to 244.008 Frontier Supercomputers). ELA offers 6+ years of proven security, a fixed cap of 28.22M tokens to be fully mined by 2105, and a 2% annual inflation rate with 4-year halving cycles, ensuring scarcity and predictability for holders. Available on Centralized Exchanges (Coinbase, KuCoin, Gate.io, Huobi, Bitget, Crypto.com) and Decentralized Exchanges (Uniswap, Chainge Finance, Glide Finance), ELA empowers holders to shape Elastos’ future through CRC governance—driving innovation, reinforcing Bitcoin-level security, and building the next generation of decentralized applications.

 

About Rollman Management Digital

Rollman Management Digital is a private investment and management consulting boutique that is incorporated in the British Virgin Islands. The firm seeks to invest in talented teams and their blockchain protocols to further develop their technology and business while adding significant value to the future of the modern economy.
https://rollmanmanagement.com

Additional Information

Learn more about ELA Merge Mining
Learn More about BeL2
Learn More about Elacity
Learn More about Cyber Republic Consensus (CRC)
Contact info@elastos.org for partnership inquiries or media requests.

Wednesday, 29. January 2025

DIF Blog

Member Spotlight: The Camino Network Foundation

The Camino Network Foundation is a non-profit organization based in Switzerland that has built a blockchain-based ecosystem transforming the global travel industry. We sat down with them to discuss their vision for the future of travel and their partnership with the Decentralized Identity Foundation (DIF) - a collaboration focused on

The Camino Network Foundation is a non-profit organization based in Switzerland that has built a blockchain-based ecosystem transforming the global travel industry. We sat down with them to discuss their vision for the future of travel and their partnership with the Decentralized Identity Foundation (DIF) - a collaboration focused on creating seamless, secure travel experiences through self-sovereign identity.

Camino Network describes itself as 'the travel industry layer one.' What specific travel industry pain points are you aiming to solve?

Camino Network is designed to address key pain points in the $11 trillion global travel industry by providing specialized Layer 1 blockchain infrastructure. Those pain points include:

High Distribution Costs: Current intermediary commissions can reach 20%, significantly cutting into margins. Camino Network enables direct, efficient B2B connections via Camino Messenger, cutting costs.
Inefficient Payment Processes: Traditional payment systems often take days to weeks for settlement and impose fees of 3%-8%, eroding travel companies' already thin EBIT margins (typically 1-4%). Camino offers lower-cost, faster payment and settlement solutions, improving cash flow and profitability.
Lengthy Time-to-Market: Onboarding travel products can take up to six months. Camino simplifies and accelerates this process, allowing companies to bring offerings to market much more quickly.

Further, Camino provides:

Secure Data Exchange & Identity Verification: Camino provides robust tools for secure data sharing and identity management, helping companies comply with regulations while maintaining user trust and operational efficiency.
Tailored Blockchain Governance: Unlike Web2 systems and organizations today, Camino Network is governed by travel companies (over 100 validators), ensuring solutions are directly aligned with industry needs and fostering collaboration.

In summary, Camino Network directly tackles inefficiencies in travel distribution, payments, and data management, enabling companies to operate faster, more securely, and at a lower cost in a competitive global market.

Your white paper details how Self-Sovereign Identity (SSI) could transform travel experiences. Could you share some concrete examples of how SSI could improve the traveler journey, from booking to destination?

Self-Sovereign Identity (SSI) transforms travel along the customer journey by giving travelers full control over their personal data. For example, during the booking process, travelers can securely share data to unlock rewards, like a 10% discount from Sleap.io for Camino (CAM) holders. At the airport, SSI-enabled zero-knowledge proofs (ZKP) streamline passport control while triggering airdropped discount vouchers from airport shops. Upon arrival, travelers can skip the rental car counter, access their vehicle instantly via SSI verification, and check into hotels digitally, eliminating paperwork and delays. These use cases enhance convenience, privacy, and personalization throughout the journey.

You've implemented mandatory KYC for smart contract deployments and KYB for validators. How does this compliance-first approach benefit travel companies looking to adopt blockchain solutions?

Camino's compliance-first approach with mandatory KYC for smart contract deployments and KYB for validators ensures a trusted and professional ecosystem for travel companies. Adhering to regulatory standards enhances legitimacy and simplifies onboarding, reducing risks in global/cross-border collaboration. This fosters confidence among partners and regulators, making blockchain adoption smoother and more secure for the travel industry.

The Camino Messenger protocol is designed to standardize travel industry communication. How do you see decentralized identity integrating with these messaging standards to improve B2B operations?

Integrating decentralized identity with the Camino Messenger protocol can streamline B2B operations, combat fraud, and boost efficiency. By standardizing the communication and identity verification processes, the travel industry can build a foundation for innovative and automated workflows, leading to secure and personalized offers.

Decentralized identity offers key B2B advantages, including:

Efficiency in Onboarding and Collaboration: DID integration can drastically reduce the time needed for partner onboarding through automation, as well as enable seamless and granular access control.
Enhanced Data Privacy and Control: Decentralized identity allows businesses to control how their data is shared within the messaging ecosystem. This is crucial in the travel industry, where sensitive information, such as customer preferences or itineraries, needs to be handled with care.

Your whitepaper mentions authenticating travel service providers through the network. How do you balance the need for trust and verification with the decentralized nature of blockchain?

Balancing trust and verification with the decentralized nature of blockchain requires a thoughtful approach combining transparency, security, and efficiency.

Decentralized Verification: Travel service providers are authenticated using blockchain-based Decentralized Identifier (DID) registries, enabling real-time credential verification without relying on a central authority. This ensures authenticity while preserving decentralization.
Decentralized Governance: A Decentralized Autonomous Consortium governs the network (these are the validators in Camino Network), aligning verification processes with the collective needs of industry stakeholders while maintaining trust and efficiency.
Transparent Yet Private Verification: Privacy-preserving technologies, like zero-knowledge proofs (zk-SNARKs), enable secure verification of credentials without exposing sensitive data. Decentralized storage further protects sensitive information while allowing access only to authorized parties.
Security and Resilience: By leveraging blockchain's decentralized architecture, applications are free from single points of failure, offering robust security and operational continuity even during disruptions.

This approach ensures a secure, transparent, and efficient environment for authenticating travel service providers, fostering trust without compromising blockchain's decentralized principles.
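
As a rough illustration of the verification pattern described above, the sketch below shows how a relying party might resolve a provider's DID from a registry to obtain its public key and then check a credential signed by that provider. It is a minimal Python sketch under our own assumptions (an in-memory registry, Ed25519 keys, and a JSON payload), not Camino's implementation or a W3C-conformant DID/VC library.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey

# Toy DID registry: a real deployment would resolve DIDs against a
# blockchain-backed registry rather than an in-memory dictionary.
DID_REGISTRY: dict[str, bytes] = {}

def raw(pub: Ed25519PublicKey) -> bytes:
    return pub.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

def register_provider(did: str, key: Ed25519PrivateKey) -> None:
    """Anchor the provider's verification key under its DID."""
    DID_REGISTRY[did] = raw(key.public_key())

def issue_credential(issuer_did: str, issuer_key: Ed25519PrivateKey, claims: dict) -> dict:
    """Sign a simple credential payload that names the issuer's DID."""
    payload = json.dumps({"issuer": issuer_did, "claims": claims}, sort_keys=True).encode()
    return {"issuer": issuer_did, "claims": claims, "proof": issuer_key.sign(payload).hex()}

def verify_credential(credential: dict) -> bool:
    """Resolve the issuer DID, then check the credential's signature against its key."""
    pub_raw = DID_REGISTRY.get(credential["issuer"])
    if pub_raw is None:
        return False  # unknown provider: authentication fails
    payload = json.dumps(
        {"issuer": credential["issuer"], "claims": credential["claims"]}, sort_keys=True
    ).encode()
    try:
        Ed25519PublicKey.from_public_bytes(pub_raw).verify(bytes.fromhex(credential["proof"]), payload)
        return True
    except InvalidSignature:
        return False

# A hotel operator registers its DID; a booking platform can then verify its
# credentials in real time without calling a central authority.
hotel_key = Ed25519PrivateKey.generate()
register_provider("did:example:hotel-123", hotel_key)
credential = issue_credential("did:example:hotel-123", hotel_key, {"licensed": True, "country": "CH"})
print(verify_credential(credential))  # True
```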

With fraudulent bookings being a major industry issue, how can Camino's identity solutions help travel companies reduce fraud while improving the customer experience?

Fraudulent bookings are minimized as only verified entities can participate in transactions, ensuring trustworthiness. Verifiable credentials issued by trusted organizations will confirm the legitimacy of bookings, reducing chargebacks and fake reservations. The blockchain ensures an immutable audit trail, enabling real-time verification and fraud detection.

The travel industry generates massive amounts of customer data. How does Camino's approach to digital identity help companies manage this data securely while meeting privacy regulations?

Camino’s digital identity approach uses decentralized identity and self-sovereign identity (SSI) principles to give customers control over their data. Verifiable credentials allow users to share only necessary information, minimizing data exposure and ensuring compliance with privacy regulations like GDPR. Sensitive data remains off-chain, while blockchain ensures secure and tamper-proof authentication. Privacy-preserving technologies like zero-knowledge proofs enable verification without revealing personal details. This reduces the risk of data breaches and allows travel companies to manage customer data securely while fostering trust and transparency.
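
To make the "share only necessary information" point concrete, here is a minimal sketch of salted-hash selective disclosure in the spirit of SD-JWT: the issuer signs only digests of the traveler's claims, and the traveler later reveals just the claims a given verifier needs. The field names and flow are illustrative assumptions, not Camino's production design.

```python
import hashlib
import json
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def claim_digest(name: str, value, salt: bytes) -> str:
    """Salted hash of a single claim, so undisclosed claims reveal nothing to the verifier."""
    return hashlib.sha256(salt + json.dumps([name, value]).encode()).hexdigest()

# Issuer: hash each claim with a fresh salt and sign only the sorted digests.
issuer_key = Ed25519PrivateKey.generate()
claims = {"name": "A. Traveler", "passport_no": "X1234567", "age_over_18": True, "loyalty_tier": "gold"}
salts = {name: os.urandom(16) for name in claims}
digests = sorted(claim_digest(name, value, salts[name]) for name, value in claims.items())
signature = issuer_key.sign(json.dumps(digests).encode())

# Holder: hotel check-in only needs proof of age and loyalty tier, so only those
# claims (with their salts) are disclosed; the passport number stays private.
disclosed = {name: (claims[name], salts[name]) for name in ("age_over_18", "loyalty_tier")}

# Verifier: confirm the issuer's signature over the digests, then check that each
# disclosed claim hashes to one of the signed digests.
issuer_key.public_key().verify(signature, json.dumps(digests).encode())  # raises if tampered
for name, (value, salt) in disclosed.items():
    assert claim_digest(name, value, salt) in digests
print("verified disclosed claims:", sorted(disclosed))
```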

What inspired you to join DIF, and how do you see travel industry standards for digital identity evolving through this collaboration?

Joining the Decentralized Identity Foundation (DIF) was inspired by the need to address the travel industry's challenges with fragmented identity systems and security concerns. By collaborating with DIF, Camino aims to contribute to open standards that ensure interoperability, enabling seamless identity verification across travel providers. This partnership enables innovation, allowing the travel industry to adopt self-sovereign identity solutions, reducing fraud, enhancing privacy, and improving customer trust. Through DIF’s collaborative framework, the travel industry can transition to unified, decentralized identity standards, enabling a more secure, efficient, and customer-centric ecosystem.

Your white paper discusses a 'global future-proof travel operating system.' Looking ahead 5-10 years, how do you envision decentralized identity transforming the travel industry's digital infrastructure? For instance, could you share your vision for how digital passports, biometrics, and SSI might converge to create seamless cross-border travel experiences?

In 5-10 years, we envision travelers carrying a DID wallet containing their passports, visas, health certificates, and frequent traveler profiles. At borders, biometric scans (e.g., facial recognition) would instantly verify identities, matching them with blockchain-secured credentials, allowing seamless passage without physical documents.

For airlines, hotels, and car rentals, SSI would enable pre-verified booking and check-ins using digital IDs, reducing queues and manual verifications. Health credentials (e.g., vaccination records) could be securely shared during pandemics without compromising privacy. This approach would ensure faster, fraud-resistant, and privacy-compliant travel experiences, redefining global travel infrastructure.

Thanks to the Camino Network Foundation for sharing their insights about the future of travel, powered by decentralized identity. To learn more, visit https://foundation.camino.network/


Blockchain Commons

Musings of a Trust Architect: The Case for an International Right to Freedom to Transact

I recently wrote about “How My Values Inform Design”. There I discussed the issue of autonomy and how it can be supported by progressive trust, proof against coercion, and other rights. One of the technical elements that I mentioned as a requirement was “[Tools that] enable meaningful participation in the digital economy.” This is the freedom to transact. It’s a right that remains conspicuously abs

I recently wrote about “How My Values Inform Design”. There I discussed the issue of autonomy and how it can be supported by progressive trust, proof against coercion, and other rights. One of the technical elements that I mentioned as a requirement was “[Tools that] enable meaningful participation in the digital economy.”

This is the freedom to transact. It’s a right that remains conspicuously absent from the foundational rights enshrined in the Universal Declaration of Human Rights (UDHR), which stands as a pillar of human dignity and freedom. But without it, the other rights articulated in the UDHR risk being rendered ineffective or hollow.

It’s so crucial because economic agency forms the bedrock upon which many fundamental freedoms rest. For instance, Freedom of Movement and Residence, core to personal autonomy, become less meaningful when an individual cannot engage in transactions necessary to secure housing or travel. Similarly, the right to property—the ability to own, buy, and sell—is directly dependent on the freedom to transact. Without access to economic exchange, these rights are significantly curtailed, reducing individuals to passive observers rather than active participants in their own lives.

Consider constitutional liberties like Freedom of Expression and Peaceful Association. These rights presuppose economic participation: the ability to rent venues, purchase communication tools, and access the materials necessary for organizing and disseminating ideas. Without the economic means to support these activities, these freedoms are stripped of their practical utility.

The introduction of an international right to Freedom to Transact would bolster the entire framework of human rights by guaranteeing that individuals can exercise their freedoms without undue restrictions on their economic autonomy. It would ensure that human dignity, as envisioned in the UDHR, is not constrained by arbitrary barriers to economic agency. By codifying this right, we would affirm that economic freedom is as essential to the human condition as freedom of thought, religion, or expression.

This proposed right would also address systemic inequalities and empower marginalized communities by ensuring that all individuals, regardless of nationality, socioeconomic status, or geographic location, can engage fully in the global economy. In doing so, we cement the idea that economic agency is not a privilege but a fundamental human right.

As the world grapples with digital transformation, financial innovation, and increasing geopolitical complexities, the necessity of a Freedom to Transact has never been clearer. It is time to elevate this principle to its rightful place alongside the other freedoms in the UDHR, securing a more equitable and dignified future for all.

Tuesday, 28. January 2025

Digital ID for Canadians

Treefort Achieves DIACC Certification, Elevating Identity Verification Standards Across Canada

Toronto, January 27, 2025 — The Digital ID and Authentication Council of Canada (DIACC) proudly announces that Treefort has received the prestigious Pan-Canadian Trust Framework…

Toronto, January 27, 2025 — The Digital ID and Authentication Council of Canada (DIACC) proudly announces that Treefort has received the prestigious Pan-Canadian Trust Framework (PCTF) Verified Person certification for its groundbreaking Identity Verification (IDV) platform. This certification, focused on the PCTF’s Verified Person Component, positions Treefort as a frontrunner in delivering secure, trusted, and reliable identity verification services, which are essential in the fight against fraud in Canada.

In an era increasingly plagued by misinformation, deepfakes, and questions about the integrity of information, Treefort’s innovative IDV solution shines as a beacon of trust. With technology designed to verify identities with lightning speed, Treefort incorporates advanced security features to combat fraud and establish authenticity in digital transactions.

“Securing digital identities is our passion, and Treefort’s PCTF certification is a testament to our unwavering commitment to excellence,” said Jay Krushell, Chief Legal Officer, Treefort. “In today’s world, where the authenticity of information is routinely challenged, our certification provides unparalleled confidence, allowing legal professionals to verify identities with certainty.”

The rigorous certification journey involved a meticulous third-party evaluation under the DIACC’s stringent certification model, which aligns with internationally recognized ISO/IEC standards. DIACC Accredited Auditors left no stone unturned, assessing Treefort’s identity-proofing functions and advanced information security measures. This forward-thinking approach reinforces Treefort’s status as a trusted partner in securing digital identities and counteracting the threats of misinformation and identity fraud.

With this DIACC certification, Treefort solidifies its place among Canada’s elite identity verification providers and earns a prestigious three-year Trustmark, subject to annual surveillance audits. 

“This is exciting news for the digital landscape; Treefort’s certification exemplifies their leadership in the identity verification space and strengthens the overall trust in digital transactions across the nation,” added Joni Brennan, DIACC President. 

Organizations searching for robust identity proofing and credential management solutions can now approach Treefort with enhanced peace of mind, knowing DIACC’s rigorous independent audit process supports them. In a climate where validating the source and integrity of information is paramount, Treefort stands out as a reliable source of verified identities, cutting through the noise of disinformation.

The PCTF framework is your go-to for risk management and assurance. It validates the reliability of private-sector digital trust services by addressing critical aspects such as privacy, security, and interoperability, helping to combat fraud and breaches at every turn.

For more information about how to become a certified provider and the incredible benefits of PCTF certification, reach out to voila@diacc.ca  

About Treefort
Treefort is leading the charge in Identity Verification solutions, tirelessly working to provide innovative and secure services to businesses across Canada. Our goal is to empower organizations with the tools they need to combat fraud and ensure compliance, forging a safer digital landscape for all. For more information, please visit https://treeforttech.com/ 

About DIACC
Founded in 2012, DIACC is a non-profit organization that unites public and private sector members to enhance participation in the global digital economy by leveraging digital trust services. By promoting vital design principles and PCTF adoption, DIACC champions privacy, security, and people-first design approaches. For more information, please visit https://diacc.ca.


Digital Identity NZ

Welcome to DINZ 2025!

Ngā mihi nui kia koutou katoa, warm greetings to you. I do hope you’ve had some R&R over the holiday break. However long it was, it’s never long enough! DINZ has powered into the New Year early with its traditional Summer Series, which started last week to take advantage of global standards expert and my … Continue reading "Welcome to DINZ 2025!" The post Welcome to DINZ 2025! appeared first

Ngā mihi nui kia koutou katoa, warm greetings to you.


I do hope you’ve had some R&R over the holiday break. However long it was, it’s never long enough!

DINZ has powered into the New Year early with its traditional Summer Series, which started last week to take advantage of global standards expert and my longtime friend Andrew Hughes being in the country for ISO SC37 (Biometrics) meetings. Despite holidays we had a fantastic turnout for ‘Deepfakes & ID verification: Your standards survival kit for the modern age’. View the video recording here.   


Next in the Summer Series is ‘Payments for the Next Generation’, led by DINZ member Payments NZ. This session aims to inform and seek feedback from the DINZ community on the digital identity component of its strategic paper currently out for consultation.


The following months will see sessions led by other DINZ members relating to the Digital Identity Services Trust Framework and the digital trust ecosystem more broadly. Stay tuned for further announcements. Speaking of members, it’s great to welcome recent new members Cianaa Technologies and SecYour. Among other things Cianaa is in the business of evaluating service providers under the DISTF and SecYour is in the business of providing identity services. Just coincidental; no connection between them is implied! 


Don’t forget DINZ’s monthly virtual Coffee Chat series starts next week. Last year’s registrants were given priority and are re-registering. So don’t delay, register now.


The year-end brings a flurry of public sector announcements and this year was no exception. First up was the NZ Banking Association’s announcement of the launch of GetVerified – the name for the confirmation of payee service that banks will be progressively rolling out. I’ve selected this post to give you the contextual low-down. Then just before the break, the Office of the Privacy Commissioner announced the long expected Code of Practice for Biometrics. As you’ll see from our submission page, DINZ has consistently argued for detailed guidance first, with regulation for exceptions or repeated poor implementation, because regulation creates some negative effects with unintended consequences. Nonetheless, we are where we are and DINZ will help members however it can.   


On the international front, NHI (Non Human Identity, once termed NPE ‘Non Person Entity’) is happily returning to ‘top of mind’ with this from OWASP. Romek also covered this in his weekly email, and my long-time fellow Identerati travellers Mike Schwartz and Heather Flanagan shared this very interesting post (note that with her IDPro hat on, Heather interviewed DINZ Exec Councillor Abhi Bandopadhyay). Again, I’ve chosen a link that I think provides more colour and flavour to the discussion. And with matters raised here very much in mind, take a look at this announcement from DINZ liaison member OWF as we progress towards the pointy end of digital wallet development (is it ‘a thing’ long term though?).


Last but not least, DINZ Chair 2022-2024 Paul Platen drew key statistics from this SC Media article. Very poignant, posing the question what Aotearoa’s equivalent numbers would be. Take a look at Paul’s post here.


I’m looking forward to the year ahead, with projects in DINZ Working Groups and Special Interest Groups, and sharing co-created papers on barriers to Fintech innovation and competition that we are undertaking in collaboration with relevant public and private sector bodies. With Minister Bayly’s oversight I think this will lead to positive change. All up, I really do think that 2025 can be the year we make Digital Trust real. 

Read the full news here: Welcome to DINZ 2025!

SUBSCRIBE FOR MORE

The post Welcome to DINZ 2025! appeared first on Digital Identity New Zealand.

Monday, 27. January 2025

Digital Identity NZ

Digital Identity Services Trust Framework

Learn more about the Trust Framework for Digital Identity in New Zealand – building trust in digital identity services in New Zealand. Visit the Department of Internal Affairs website. The post Digital Identity Services Trust Framework appeared first on Digital Identity New Zealand.

Learn more about the Trust Framework for Digital Identity in New Zealand – building trust in digital identity services in New Zealand.

Visit the Department of Internal Affairs website.

The post Digital Identity Services Trust Framework appeared first on Digital Identity New Zealand.

Friday, 24. January 2025

FIDO Alliance

PayPal Newsroom: Solving the Convenience and Security Equation

PayPal has remained at the forefront of the digital payment revolution for more than 25 years by creating innovative experiences that empower over 400 million consumers and merchants to move […]

PayPal has remained at the forefront of the digital payment revolution for more than 25 years by creating innovative experiences that empower over 400 million consumers and merchants to move money easily and securely.

Safety is a cornerstone of our global operations, and we are committed to protecting our users across the approximately 200 markets that we serve. In this piece, we detail the latest developments in authentication security and share recommendations for policymakers to enable increased safety in the digital economy.


Internet Safety Labs (Me2B)

One Bad Apple—Automatically Opting Users into AI Training

Last week Apple default-enabled all apps (native and 3rd party) to Siri “learning”. I thought I’d wrap up with a reflection on Apple’s action and whether the concern was warranted.   The initial reaction for most of us was, “Oh great. Yet another forced offering to the great AI gods. No thank you!” I and others […] The post One Bad Apple—Automatically Opting Users into AI Training appe

Last week Apple default-enabled Siri “learning” for all apps (native and 3rd party). I thought I’d wrap up with a reflection on Apple’s action and whether the concern was warranted.

The initial reaction for most of us was, “Oh great. Yet another forced offering to the great AI gods. No thank you!” I and others had a strong kneejerk reaction of “this is NOT ok.” Was the initial reaction warranted? Figure 1 shows my original LinkedIn posts below, which were updated real time as I continued to explore the situation.

Figure 1

When I took a closer look, Apple clearly tries to make Siri as “edge-y” as possible—i.e. executing independently on the device to the extent possible. But how safe is the architecture? What data exactly gets shared and with what parts of Apple’s infrastructure? This is the problem with all the large platforms: we just can’t observe server-to-server behaviors within the infrastructure.  

Here’s what I do know. Apple’s action was surprisingly presumptuous and disrespectful to their users. It was markedly off-brand for the privacy-evangelizing company, and their actions (or inactions, as the case may be) since the time this flared up are telling.   

After the dust has settled, I stand by my recommendation to disable Siri learning, and I’m less concerned about Siri suggestions. Here’s why. 

Apple did this in a sneaky way. Not at all on-brand for a privacy-touting company.

It’s at least the second time Apple behaved in this way in recent days. On January 3, 2025, The Register reported that Apple auto-opted everyone into AI analysis of their photos: https://www.theregister.com/2025/01/03/apple_enhanced_visual_search/

You may have missed it, but on January 8, 2025 Apple issued a press release pretty much saying, “Siri’s super private—hooray!” This is most likely when the auto-opt-in happened, though I have no supporting evidence: https://www.apple.com/newsroom/2025/01/our-longstanding-privacy-commitment-with-siri/ Coincidence? Hard to think so. Also the language in the press release raises more questions than answers:

“Siri Uses On-Device Processing Where Possible” Great. We’d like to know more about that.

“…the audio of user requests is processed entirely on device using the Neural Engine, unless a user chooses to share it with Apple.” Like, if they toggle a setting for Siri Learning to “enabled”, for example? Like that kind of “choosing” to share it with Apple?

“Apple Minimizes the Amount of Data Collected for Siri Requests” Ok, but it’s not zero. It goes on to say, “certain features require real-time input from Apple servers.” Careful wording here. I’d like more info on the data that gets sent to the Apple servers, and how it’s used. The press release segues right into this: “a random identifier – a long string of letters and numbers associated with a single device…is used to keep track of data while it’s being processed, rather than tying it to a user’s identity through their Apple Account or phone number”. I find that phrasing curious. Note that it says “associated with a single device” and NOT “associated with a single device for a single query”. Is the identifier long-lived? Is the identifier in fact some kind of internal Apple universal identifier for you—a join key assembling an ever-growing biographical log of you [like virtually everyone else in martech and edtech]? Any kind of permanent, unique identifier is risky, and don’t let anyone tell you otherwise. I really hope I’m wrong on this.

“Private Cloud Compute” This sounds promising but we’re going to need some more info about this.

“…many of the models that power Apple Intelligence run entirely on device.” But apparently some don’t. Again, going to need more details here.

It was obnoxiously difficult to unwind this default opt-in. You couldn’t just go to Settings -> Apps, you also had to go to Settings -> Siri to get all the apps. Think about this. This is Apple, the company who (according to many) was the epitome of usable design. I think about Jony Ive and Steve Jobs—would they stand behind this kind of user interaction?

Siri learning was enabled for Every. Single. App. Including native Passwords, Phone, Apple Home, and Health Data. I get that there are legitimate reasons why Siri learning should be connected for every app—e.g. people who must use voice as their primary mode of device interaction. I’m surprised, however, that they didn’t think it might be a tad risky to be auto-enabled for things like Passwords for everyone. It feels weirdly clumsy for a company not known for clumsiness. (Unless you’re building a training set for an LLM, in which case it makes complete sense.) NB: I still can’t find a way to disable it from Calendar.

Disabling Siri learning and suggestions has done some weird things to non-voice interactions on my phone, which makes me wonder just how interwoven Siri is within all modes of user interaction, beyond just voice interaction. Example: the voice note recorder stopped showing text on the record button and it no longer supports “resume [recording]”; it now just ends the recording and I need to start a new one if I want to extend the recording with a new idea. I thought Siri was just supposed to be about voice interaction? Apparently not. (Note: upon re-enabling Siri Learning and Siri suggestions, the text under the button and the resume function re-appeared. Huh.)

Apple Magazine’s (November 25, 2024) piece on the forthcoming Siri revamp reinforces the fact that the boundaries of what is and isn’t “Siri” are hazy indeed (Figure 2). The story affirms that the Siri update touches multimodal interaction capabilities.

Figure 2: Source https://applemagazine.com/siri-engine-revamp-what-apples-next/

One starts to wonder what isn’t Siri when it comes to user interface and interaction, and where Siri begins and ends in terms of software execution and data access. This year is already touted as the year of agentic AI—what could be more agentic than AI-infused Siri?

For me, the biggest smoking gun is that late last year Apple announced that Siri will be powered by Apple’s LLM (https://superchargednews.com/2024/11/21/llm-siri-to-launch-by-spring-2026/). First off, “powering Siri” with Apple’s LLM already sets off some alarm bells. The timing of this forced opting-in to Siri Learning seems quite aligned with the development timeline of an LLM said to be launching in late 2025/2026. “The new Siri will be powered by Apple’s advanced Large Language Models (LLM), which will make the digital assistant more conversational and ChatGPT-like.” I can’t really imagine a world where Apple wouldn’t train their LLM off their current customer base.

Siri Architecture: Because I love architecture and because I can remember when Siri was a baby 3rd-party app, I wanted to go back and take a brief look at its evolving architecture. As of 2017, the Siri architecture was a poster child of typical app client-server architecture, and the reliance on the server is clear (Figure 3), with even the trigger-word audio being sent to the server.

Figure 3: Source https://machinelearning.apple.com/research/hey-siri    

2017 is, of course, ancient times in developer years and Apple has been relatively transparent about how they’ve been rearchitecting Siri to [at least] keep the trigger word detection on the device (https://machinelearning.apple.com/research/voice-trigger).

The last thing I want to mention is that Apple has been preternaturally silent about this whole thing. I find that remarkably off-brand. Unless Siri and Apple AI are inextricably interwoven and this was actually a training set creation exercise, in which case, probably best to keep silent.

Is Apple training their LLM via the forced Siri Learning opt-in? Maybe. Will be good to hear from them on this. And while we’re at it, I’d love a new “revamped” Siri and Apple LLM architecture diagram/document, with greater transparency and detailed information on the functionality distribution and data sharing between the device and back-end servers and services. Please and thank you.  

The post One Bad Apple—Automatically Opting Users into AI Training appeared first on Internet Safety Labs.

Thursday, 23. January 2025

Velocity Network

Dr. Henry Mack joins Velocity’s board

We're delighted that National Student Clearinghouse's Chris Goodson has been voted onto the Velocity Network Foundation Board of Directors. The post Dr. Henry Mack joins Velocity’s board appeared first on Velocity.

Wednesday, 22. January 2025

Next Level Supply Chain Podcast with GS1

Breaking Barriers: How DSCSA Transforms Healthcare Supply Chains

Traceability and supply chain integrity are more than just buzzwords—they’re the backbone of patient safety and industry innovation.  In this episode, host Reid Jackson welcomes Gary Lerner, Founder and CEO of Gateway Checker, to explore the transformative power of the Drug Supply Chain Security Act (DSCSA). They discuss how the shift from lot-level to item-level traceability is revolutio

Traceability and supply chain integrity are more than just buzzwords—they’re the backbone of patient safety and industry innovation. 

In this episode, host Reid Jackson welcomes Gary Lerner, Founder and CEO of Gateway Checker, to explore the transformative power of the Drug Supply Chain Security Act (DSCSA).

They discuss how the shift from lot-level to item-level traceability is revolutionizing healthcare, providing unprecedented safeguards against counterfeiting and channel diversion. From the mechanics of 2D barcodes to the role of AI in analyzing supply chain data, Gary shares practical insights from his 20+ years of experience navigating the intersection of digital and physical supply chains.
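
The “license plate” described here is, in GS1 terms, a serialized identifier carried in the 2D barcode: Application Identifier (01) for the GTIN, (17) for the expiry date, (10) for the lot, and (21) for the unit-level serial number. As a rough, non-authoritative sketch (a toy parser over the human-readable, parenthesized element string, not the FNC1-delimited data a scanner actually emits, and not a DSCSA-compliant implementation), it might look like this:

```typescript
// Toy parser for the human-readable GS1 element string printed under a 2D
// barcode, e.g. "(01)00312345678906(17)261231(10)A1B2C3(21)000000001234".
// Illustrative only; real scanners emit FNC1-delimited data.

interface DrugPackageId {
  gtin: string;   // AI 01 - Global Trade Item Number (product level)
  expiry: string; // AI 17 - expiration date, YYMMDD
  lot: string;    // AI 10 - batch/lot number (lot-level traceability)
  serial: string; // AI 21 - serial number (item-level traceability)
}

function parseElementString(label: string): DrugPackageId {
  const fields: Record<string, string> = {};
  // Capture "(AI)value" pairs; a value runs until the next "(" or end of string.
  for (const match of label.matchAll(/\((\d{2,4})\)([^(]+)/g)) {
    fields[match[1]] = match[2];
  }
  for (const ai of ["01", "17", "10", "21"]) {
    if (!fields[ai]) throw new Error(`Missing GS1 Application Identifier (${ai})`);
  }
  return { gtin: fields["01"], expiry: fields["17"], lot: fields["10"], serial: fields["21"] };
}

// The same GTIN + lot can appear on thousands of packages; GTIN + serial
// identifies exactly one saleable unit, which is the shift DSCSA requires.
const pkg = parseElementString("(01)00312345678906(17)261231(10)A1B2C3(21)000000001234");
console.log(pkg.gtin, pkg.serial);
```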

 

In this episode, you’ll learn:

How DSCSA is transforming healthcare supply chains with item-level traceability

The critical role of 2D barcodes in ensuring authenticity and patient safety

Why data quality and interoperability are the next big steps for supply chain efficiency

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:18) From brand protection to healthcare supply chains

(03:37) Innovating item-level traceability in goods

(05:15) The importance of supply chain integrity in healthcare

(06:48) Breaking down the “license plate” of pharmaceuticals

(09:23) Understanding DSCSA and its impact on patient safety

(12:38) How bad actors exploit supply chain gaps

(13:49) Counterfeit prevention as part of national security

(16:10) Using interconnectivity to uncover supply chain risks

(19:16) Best practices for adapting to DSCSA regulations

(21:09) How serialization enables better inventory accuracy

(26:05) The future of supply chain integrity and AI innovation

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with Guest:

Gary Lerner on LinkedIn

Gateway Checker

Tuesday, 21. January 2025

FIDO Alliance

Analytics Insight: Revolutionizing Digital Security: The Rise of Passkeys

The Core of Passkey Technology Passkeys, a breakthrough in the realm of digital security, eliminate the vulnerabilities of password-based systems. Utilizing cryptographic key pairs, passkeys are designed to safeguard user […]

The Core of Passkey Technology

Passkeys, a breakthrough in the realm of digital security, eliminate the vulnerabilities of password-based systems. Utilizing cryptographic key pairs, passkeys are designed to safeguard user identities without relying on shared secrets. The system operates on a challenge-response mechanism: a private key stored securely on the user’s device interacts with a public key on the service provider’s server. This interaction ensures that sensitive credentials are never exposed, making passkeys inherently resistant to phishing attempts and credential theft.

This technology is underpinned by the FIDO2 standard, which comprises WebAuthn and the Client-to-Authenticator Protocol (CTAP). WebAuthn facilitates seamless integration of passkeys into web applications, while CTAP supports communication between devices and authenticators, ensuring flexibility and security. Together, these components offer a standardized and robust framework for passwordless authentication across various platforms.
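
To make the challenge-response flow concrete, here is a minimal browser-side sketch of passkey registration using the WebAuthn API. The relying-party name, user details, and locally generated challenge are illustrative assumptions; in a real deployment the challenge comes from the server, and the server must also verify the returned attestation.

```typescript
// Minimal sketch of passkey (WebAuthn) registration in the browser.
// Illustrative only: the challenge must really be issued and checked server-side.

async function registerPasskey(): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // placeholder for a server-issued challenge

  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge,
    rp: { name: "Example Service", id: "example.com" },  // relying party (illustrative)
    user: {
      id: new TextEncoder().encode("user-handle-123"),    // opaque user handle, not the email
      name: "alice@example.com",
      displayName: "Alice",
    },
    // COSE algorithm identifiers: -7 = ES256, -257 = RS256
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },
      { type: "public-key", alg: -257 },
    ],
    authenticatorSelection: {
      residentKey: "required",       // a discoverable credential, i.e. a passkey
      userVerification: "preferred", // biometric or PIN on the authenticator
    },
    timeout: 60_000,
  };

  // The private key never leaves the authenticator; only the public key and a
  // signed attestation are returned for the server to store and verify.
  const credential = await navigator.credentials.create({ publicKey });
  console.log("new credential:", (credential as PublicKeyCredential)?.id);
}
```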


TechRadar: Passwords out, passkeys in: The future of secure authentication

Since the inception of the internet, passwords have been the primary authentication factor to gain access to online accounts. Yubico’s recent Global State of Authentication survey of 20,000 employees found that 58 percent still […]

Since the inception of the internet, passwords have been the primary authentication factor to gain access to online accounts. Yubico’s recent Global State of Authentication survey of 20,000 employees found that 58 percent still use a username and password to login to personal accounts, with 54 percent using this login method to access work accounts.

This is despite the fact that 80 percent of breaches today are a result of stolen login credentials from attacks like phishing. Because of this, passwords are widely understood by security experts as the most insecure authentication method that leaves individuals, organizations and their employees around the world vulnerable to increasingly sophisticated modern cyber attacks like phishing.


MobileIDWorld: Research Reveals Security Implications of FIDO2 and Synced Passkeys

Recent academic research has revealed new insights into the security considerations surrounding FIDO2 authentication and synced passkeys, highlighting both the strengths and potential vulnerabilities of current authentication systems. The analysis […]

Recent academic research has revealed new insights into the security considerations surrounding FIDO2 authentication and synced passkeys, highlighting both the strengths and potential vulnerabilities of current authentication systems. The analysis comes at a time when major technology companies are increasingly adopting passkey technology, with Microsoft reporting login times three times faster than traditional passwords.

Formal methods analysis of the FIDO2 standard has revealed potential weaknesses in the underlying protocols that warrant attention from security professionals. The research particularly focuses on the implementation of synced passkeys, which enable cross-device access through passkey providers. These findings support recent expert warnings about interoperability concerns in FIDO2 implementations.


ZDNet: What are passkeys? How going passwordless can simplify your life in 2025

You probably have a lot of passwords in your life. Even with the help of password managers, passwords are becoming more and more of a burden for most people. Long gone […]

You probably have a lot of passwords in your life.

Even with the help of password managers, passwords are becoming more and more of a burden for most people.

Long gone are the days of being able to use and reuse rubbish passwords like p455w0rd123. Now, all of your online accounts need to be protected by passwords that are complex and unique.

Also: Passkeys take yet another big step towards killing off passwords

You also need to be ever vigilant in case one of your many passwords is compromised.

There’s a better solution: Passkeys.


Tech Target: Adopt passkeys over passwords to improve UX, drive revenue

The digital economy continues to rely on password-based authentication, but password weaknesses — and human nature — make them horrible for security. Password use also impacts businesses’ bottom lines because […]

The digital economy continues to rely on password-based authentication, but password weaknesses — and human nature — make them horrible for security. Password use also impacts businesses’ bottom lines because every year, forgotten passwords and password resets result in millions of dollars of lost sales and wasted IT staff hours.

It’s a “password tax” on businesses and consumers that no one can seem to get past.

As the digital economy has grown, so has the value associated with passwords. As a result, phishing and credential theft continue to run rampant, with stolen credentials sold openly on the dark web.

To protect people, organizations add more friction and worsen UX. They ask users to create long and complex passwords, change passwords every few months and use MFA. This results in lost sales, reduced company productivity and added costs.

A secure alternative to the password has emerged: passkeys. This option can strengthen organizations’ security posture because passkeys have the potential to generate billions in revenue and cost savings for businesses.


Federal Register: Strengthening and Promoting Innovation in the Nation’s Cybersecurity

A Presidential Document by the Executive Office of the President on 01/17/2025 Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity  The WebAuthn standard was called out by name in a new cybersecurity […]

A Presidential Document by the Executive Office of the President on 01/17/2025

Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity 

The WebAuthn standard was called out by name in a new cybersecurity executive order (EO) that was released by the White House – in the final days of the Biden Administration. Among other things, the new EO effectively codifies a previous 2022 policy memo that called for the US government to use only phishing-resistant authentication. 


Insurance Business: Experts warn NZ businesses to prepare for AI-driven cyber threats

Cybersecurity experts are calling on New Zealand businesses to strengthen their defences as cyber threats grow in sophistication. Key developments, such as AI-driven phishing, the adoption of digital identity wallets, […]

Cybersecurity experts are calling on New Zealand businesses to strengthen their defences as cyber threats grow in sophistication.

Key developments, such as AI-driven phishing, the adoption of digital identity wallets, and the shift to passkey authentication, are reshaping the cybersecurity landscape. These trends, combined with rising attack frequencies, require organisations to adopt proactive measures and align with evolving regulatory standards.

Global leaders, including experts from Yubico, and local organisations like CERT NZ and the National Cyber Security Centre (NCSC), have identified critical areas of focus for 2025. These include:

combating increasingly sophisticated attacks

implementing modern authentication methods

prioritising board-level involvement in cybersecurity strategies


Biometric Update: State of passkeys 2025: passkeys move to mainstream

More than 1 billion people have activated at least one passkey according to the FIDO Alliance – an astonishing number that highlights the quick evolution of passkeys from a buzzword to a […]

More than 1 billion people have activated at least one passkey according to the FIDO Alliance – an astonishing number that highlights the quick evolution of passkeys from a buzzword to a trusted login method. In just two years, consumer awareness of the technology jumped from 39% to 57%. Let’s see how passkeys have moved to mainstream.


National Cyber Security Centre: Passkeys: they’re not perfect but they’re getting better

Passkeys are the future of authentication, offering enhanced security and convenience over passwords, but widespread adoption faces challenges that the NCSC is working to resolve. What’s wrong with passwords – […]

Passkeys are the future of authentication, offering enhanced security and convenience over passwords, but widespread adoption faces challenges that the NCSC is working to resolve.

What’s wrong with passwords – why do we need passkeys?

Most cyber harms that affect citizens occur through abuse of legitimate credentials. That is, attackers have obtained the victim’s password somehow – whether by phishing or exploiting the fact the passwords are weak or have been reused.

Passwords are just not a good way to authenticate users on the modern internet (and arguably weren’t suitable back in the 1970s when the internet was used by just a few academics). Adding a strong – phishing-resistant – second factor to passwords definitely helps, but not everyone does this and not every type of Multi-Factor Authentication (MFA) is strong.


GlobeNewswire: Passwordless Authentication Market to Surpass Valuation of US$ 8,944.3 Million By 2033

Growing enterprise reliance on biometric and token-based authentication propels the passwordless market forward. Providers innovate frictionless FIDO2/WebAuthn solutions, boosting collaboration between fintech, retail, and the public sector, while unresolved interoperability […]

Growing enterprise reliance on biometric and token-based authentication propels the passwordless market forward. Providers innovate frictionless FIDO2/WebAuthn solutions, boosting collaboration between fintech, retail, and the public sector, while unresolved interoperability hinders the seamless global rollout of passkey technologies.


GlobeNewswire: Expanding the API Economy and CIAM with Passkeys, Identity Verification, and Decentralized Identity

The study illustrates successful implementations of CIAM solutions across various verticals and use cases. This report’s geographic coverage is global. The study period is 2023-2029, with 2024 as the base […]

The study illustrates successful implementations of CIAM solutions across various verticals and use cases. This report’s geographic coverage is global. The study period is 2023-2029, with 2024 as the base year and 2025-2029 as the forecast period.

The report defines consumer identity and access management (CIAM) as a framework that controls and manages consumer identities, access, and policies across IT infrastructures to protect enterprises from unauthorized and potentially harmful security breaches. CIAM solutions include single sign-on, multi-factor authentication, identity verification, lifecycle management (provisioning, deprovisioning), password management, and compliance management.


HYPR Unmasks a Fake IT Worker: North Korea Isn’t the Only Threat

Highlights:

Fraudulent job applicants posing as IT workers from countries like North Korea have infiltrated organizations, posing significant security risks.

HYPR encountered a potential fraud attempt during its onboarding process and successfully thwarted it using its Identity Assurance platform.

HYPR’s use of multi-layered identity verification, including biometrics and video verification, helped prevent the fraudulent hire from gaining access to their systems.

This issue is not limited to North Korea plots; fake workers and interview fraud are widespread and growing.

Monday, 20. January 2025

We Are Open co-op

Towards an Open Recognition Wallet

Note: this post builds upon ideas we presented at ePIC 2024, and a couple of in-depth posts about using Open Recognition to map real-world skills and attributes. You don’t need to have read those first, but they may provide more context in case you have questions. Now that the latest version of the Open Badges specification is aligned with the Verifiable Credentials data model, there’s a new way

Note: this post builds upon ideas we presented at ePIC 2024, and a couple of in-depth posts about using Open Recognition to map real-world skills and attributes. You don’t need to have read those first, but they may provide more context in case you have questions.

Now that the latest version of the Open Badges specification is aligned with the Verifiable Credentials data model, there’s a new way to store your credentials: digital wallets. Instead of your badges or credentials being stored on someone else’s platform (usually the issuer’s) you store them on a wallet on your mobile device. In other words, you, as a learner or earner are fully in control.

This approach will be familiar to anyone who has ever paid for something using their smartphone, or used a digital ticket to attend an event or access public transport. The difference in this case is that the wallet is also a portfolio of your achievements. It’s a personal showcase of who you are, what you know, and what you can do.

Image CC BY-ND Visual Thinkery for WAO

It’s still early days for digital wallets, but with even the UK government getting in on the act, now is the perfect time to be thinking about what an Open Recognition Wallet might look like. It’s also a good time to consider this, as the Digital Credentials Consortium hands over stewardship of the open source Learner Credential Wallet to the Open Wallet Foundation.

Open Recognition, for those who need a reminder, is the community of people who hold fast to the original vision for Open Badges, as outlined in this Mozilla white paper. These days, we define it as:

“Open Recognition is the awareness and appreciation of talents, skills and aspirations in ways that go beyond credentialing. This includes recognising the rights of individuals, communities, and territories to apply their own labels and definitions. Their frameworks may be emergent and/or implicit.” (Badge Wiki)

So what might an Open Recognition Wallet look like in practice? How might it differ from one that, for example, is to store your passport, driving license, or payment cards?

Core Principles

This post is a conversation-starter for the community. Perhaps, based on the original Open Badges vision, and our evolving understanding of Open Recognition, a wallet should:

Recognise learning across all contexts, not just formal education

Empower individuals to control and showcase their diverse achievements

Be inclusive of often-marginalised and excluded groups

Support lifewide and lifelong learning pathways

Enable recognition from multiple sources, including self-issued and community endorsements

Image CC BY-ND Visual Thinkery for WAO

Key Features

E-portfolios and badge backpacks have been around for a while now, so how might an Open Recognition Wallet build on these? Here are some ideas:

1. Flexible Recognition Capture
Support for self-issued badges
Mechanisms for community endorsement
Semi-automated recognition and achievement tracking

2. Community-Driven Endorsement
Semi-automated peer recognition
Transparent endorsement processes
Storytelling functionality by adding contextual info/evidence

3. Learning Analytics Integration
Privacy-preserving, on-device data analysis
Insights into potential future directions (life/career)
Skills alignment with multiple taxonomies/frameworks

4. Social and Engaging Design
Ability to customise both interface and look of portfolio(s)
Community-building features to encourage solidarity
Storytelling capabilities for achievements

5. Technical Foundations
Based on the latest version of the Open Badges specification
Available on every platform (potentially a webapp?)
User-controlled, granular data sharing

A Multiplicity of Wallets

While it’s entirely possible to use the same digital wallet for Open Recognition as for passports, driving licenses, and other identity documents, it isn’t very likely. As with badge platforms, various entities will build wallets that suit their customers or communities. Wallets that become mainstream will likely come from Big Tech or governments, though nonprofit and educational institutions will continue to innovate in the space.

In the same way we use multiple social networks to reflect different aspects of our personality and to connect with various networks, I can imagine multiple digital wallets that showcase nuanced portfolios of skills and attributes, depending on the context.

Image CC BY-ND Visual Thinkery for WAO

Differentiating Factors

I’d love to see the Open Recognition community’s thoughts on this. For me, the main differentiating factors for an Open Recognition wallet are:

a) A focus on holistic recognition

The original Open Badges vision helped support the concept of Connected Learning. This is an approach which is participatory, learner centred, interest-driven, and inclusive. While usually focused on young people, there is no age limit to interest. No upper bounds on inclusivity.

By recognising all of the contributions we make to the world, all of the positive behaviours we exhibit, we can create a much more holistic, three-dimensional view of who we are and what drives us. This is not merely so that we can provide better data points for employers looking for labour, but so that we can “find the others”, create communities, build solidarity, and make the world a better place.

b) Celebration of diverse learning experiences

Learning happens everywhere, but it’s usually only captured in formal, pre-determined ways. Capturing smaller, more granular, and diverse learning experiences is important too. That might be related to cultural awareness, to privilege and power, or just how to fix your washing machine by watching a YouTube video.

Not every learning experience will be captured, but it’s important that they can be, and in terms that chime with the Open Recognition exhortation to allow people to use their own labels and definitions. After-the-fact alignment with pre-existing skills taxonomies is less important than validating knowledge, skills, and behaviours in ways that make sense to the learner.

c) Community-driven validation

One of the most exciting things about the latest version of Open Badges is that it allows you to ‘endorse’ a learner’s credential after it’s been issued. One way of thinking about this is as an evidence-based recommendation system where the value of the credential increases over time.

For example, I might make a New Year’s Resolution to be more helpful to a particular community. I could self-issue the ‘Community Helper’ badge, and this would serve as a repository for community members’ endorsements of my assistance. A similar idea is explored in these blog posts. A digital wallet that can explicitly show the value that an individual is bringing to a community is incredibly powerful and valuable.
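
As a rough sketch of what that could look like on the wire, here is a hand-written example shaped loosely on the Open Badges 3.0 / Verifiable Credentials data model: a self-issued badge plus a later endorsement from someone else. The identifiers, context URL, and exact field names are illustrative assumptions rather than spec-checked values, and real credentials also carry a cryptographic proof; consult the 1EdTech specification for the authoritative structure.

```typescript
// Illustrative only: a self-issued "Community Helper" badge and an endorsement,
// loosely following Open Badges 3.0 / VC conventions. Field names and the
// context URL are assumptions; check the 1EdTech spec before relying on them.

const communityHelperBadge = {
  "@context": [
    "https://www.w3.org/ns/credentials/v2",
    "https://purl.imsglobal.org/spec/ob/v3p0/context-3.0.3.json", // verify current context URL
  ],
  type: ["VerifiableCredential", "OpenBadgeCredential"],
  issuer: { id: "did:example:alice", type: ["Profile"], name: "Alice (self-issued)" },
  validFrom: "2025-01-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:alice",
    type: ["AchievementSubject"],
    achievement: {
      id: "urn:uuid:00000000-0000-0000-0000-000000000001", // placeholder
      type: ["Achievement"],
      name: "Community Helper",
      description: "A New Year's resolution to be more helpful to my community.",
      criteria: { narrative: "Evidence accumulates through community endorsements." },
    },
  },
};

// An endorsement issued afterwards by a community member, referencing the achievement above.
const endorsement = {
  "@context": communityHelperBadge["@context"],
  type: ["VerifiableCredential", "EndorsementCredential"],
  issuer: { id: "did:example:bob", type: ["Profile"], name: "Bob" },
  validFrom: "2025-02-10T00:00:00Z",
  credentialSubject: {
    id: "urn:uuid:00000000-0000-0000-0000-000000000001",
    type: ["EndorsementSubject"],
    endorsementComment: "Alice helped moderate our community calls throughout January.",
  },
};

console.log(communityHelperBadge.credentialSubject.achievement.name, endorsement.issuer.name);
```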

Next steps

I know that Nate Otto is building something which feels close to what’s described above. The digital wallet side of things, however, is not handled by the ORCA platform, but rather left to the user to decide. I’m all for user choice, but I would love to help bring into being a default and much-loved place for Open Recognition badges to live.

If you’re part of the ORE community, why not bring your thoughts on this to our next community call. And if you’re not, but this kind of thing interests you, why don’t you join us?

Towards an Open Recognition Wallet was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 17. January 2025

Origin Trail

2025 Roadmap update: Synergy of AI agents and autonomous DKG

In 2024, the OriginTrail ecosystem achieved remarkable milestones, driving innovation in decentralized knowledge and AI integration. The three stages (or impact bases) of the V8 Foundation were formulated as inspired by the legendary works of Isaac Asimov. They prophetically symbolize steps towards the future where the reliable and trusted knowledge base or Collective neuro-symbolic AI drives syne

In 2024, the OriginTrail ecosystem achieved remarkable milestones, driving innovation in decentralized knowledge and AI integration. The three stages (or impact bases) of the V8 Foundation were formulated as inspired by the legendary works of Isaac Asimov. They prophetically symbolize steps towards the future where the reliable and trusted knowledge base or Collective neuro-symbolic AI drives synergies between AI agents and the Autonomous Decentralized Knowledge Graph (DKG) in a human-centric way.

The updated roadmap recaps the most important achievements of the past year, and highlights the road ahead (full updated roadmap available here).

The year kicked off with the establishment of the Impact Base: Trantor (home to the Library of Trantor, where librarians systematically indexed human knowledge in a groundbreaking collaborative effort), which catalyzed key advancements, including Knowledge Mining, which introduced Initial Paranet Offerings (IPOs) and autonomous knowledge mining initiatives. Simultaneously, the release of delegated staking enabled TRAC delegation for network utility and security, enhancing inclusivity and participation in DKG infrastructure.

Following Trantor, Impact Base: Terminus was activated with key catalysts for adoption, including multichain growth, integrating DKG with the Base blockchain ecosystem, and implementing transformative scalability solutions such as asynchronous backing on NeuroWebAI blockchain on Polkadot and batch minting features.

The introduction of ChatDKG.ai revolutionized interaction with DKG and paranets, integrating AI models across platforms like Google Vertex AI, OpenAI, and NVIDIA. Meanwhile, the release of Whitepaper 3.0 outlined the vision of a Verifiable Internet for AI, bridging crypto, Web3, and AI technologies to address misinformation and data integrity challenges.

The deployment of OriginTrail V8 and its Edge Nodes brought Internet-scale to the ecosystem. Edge Nodes redefine how sensitive data interacts with AI-driven applications, keeping it on devices while enabling controlled integration with both the DKG and neural networks. This privacy-first architecture facilitates local AI processing, ensuring secure utilization of private and public knowledge assets. In addition, OriginTrail V8 achieves monumental scalability improvements with the random sampling proof system that reduces on-chain transaction requirements by orders of magnitude thus boosting the DKG’s throughput in a major way.

The DKG V8 provides a powerful substrate to drive synergies between AI and collective-neuro symbolic AI capable of driving AI agents’ autonomous memories and trusted intents, as both AI agents and robots alike become potent enough to act on behalf of humans.

Roadmap for 2025 and beyond: Advancing collective neuro-symbolic AI with the DKG

The 2025 roadmap marks a leap forward for the OriginTrail ecosystem, as the Decentralized Knowledge Graph (DKG) becomes the cornerstone for collective neuro-symbolic AI, a powerful fusion of neural and symbolic AI systems.

With the establishment of Impact Base: Gaia, the roadmap envisions the system functioning as a super-organism, where decentralized AI agent swarms share and expand their collective memory using the DKG. This shared memory infrastructure, combined with the autonomous inferencing and knowledge publishing capabilities of DKG V8, lays the foundation for decentralized AI that seamlessly integrates neural network and knowledge graph reasoning with trusted, verifiable knowledge. The result is a robust AI infrastructure capable of addressing humanity’s most pressing challenges at an accelerated pace.

At the heart of this vision lies the Collective Agentic Memory Framework, enabling autonomous AI agents to mine, publish, and infer new knowledge while ensuring privacy and scalability. This vision is enabled by establishing scalable infrastructure and tools such as AI agent framework integrations (such as the ElizaOS DKG integration), the NeuroWeb Collator staking and bridge, and DKG Edge node private knowledge repositories.
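
As a purely illustrative sketch of what agent memory on the DKG could look like from a developer's point of view, the snippet below publishes a small JSON-LD "memory" record as a Knowledge Asset. The DkgClient interface and its publishKnowledgeAsset method are hypothetical stand-ins, not the actual dkg.js or Edge Node API; the intent is only to show the shape of the flow (structure the memory, choose public or private, publish, get back a verifiable identifier).

```typescript
// Hypothetical client interface, for illustration only; not the real dkg.js API.
interface DkgClient {
  publishKnowledgeAsset(
    content: object,
    options: { visibility: "public" | "private" }
  ): Promise<{ ual: string }>;
}

// A small JSON-LD "memory" an AI agent might persist after a task.
const agentMemory = {
  "@context": "https://schema.org",
  "@type": "CreativeWork",
  name: "Conversation summary 2025-01-17",
  about: "User asked about provenance data for GTIN 00312345678906",
  dateCreated: new Date().toISOString(),
};

async function persistMemory(dkg: DkgClient): Promise<string> {
  // Private knowledge can stay on the agent's Edge Node; public knowledge is
  // anchored on the DKG where other agents can discover and verify it.
  const { ual } = await dkg.publishKnowledgeAsset(agentMemory, { visibility: "private" });
  return ual; // a Universal Asset Locator: the verifiable handle to this memory
}
```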

Those who invest in using the DKG, build the DKG: 60MM TRAC Collective Programmatic Treasury (CPT)

The roadmap also introduces decentralized growth through initiatives like the Collective Programmatic Treasury (CPT), allocating 60 million $TRAC over a Bitcoin-like schedule to incentivise an ecosystem of DKG developers based on a meritocratic system of knowledge contribution.

The quote taken from The Matrix Ending

As adoption spreads across industries such as DeSci, robotics, healthcare, and entertainment, this interconnected ecosystem drives network effects of shared knowledge, exponentially amplifying the collective intelligence of AI agents. By aligning decentralized AI efforts with the DKG’s unifying framework, OriginTrail unlocks the potential for Artificial General Intelligence (AGI) through the synergy of all human knowledge, creating a future where AI reflects the full spectrum of human insight and wisdom.

Impact base: Gaia (established in Q1 2025)

The human beings on Gaia, under robotic guidance, not only evolved their ability to form an ongoing telepathic group consciousness but also extended this consciousness to the fauna and flora of the planet itself, even including inanimate matter. As a result, the entire planet became a super-organism.

DKG V8

Scalable and robust foundation for enabling next stage of Artificial Intelligence adoption with decentralized Retrieval Augmented Generation (dRAG), combining symbolic and neural decentralized AI. DKG V8 is catalysing the shift from attention economy to intention economy.

✅ DKG Edge Nodes

✅ New V8 Staking dashboard

✅ New V8 DKG Explorer

✅ Batch minting (scalability)

Random sampling (scalability)

Collective Agentic memory framework

Eliza integration (Github)

ChatDKG Framework for AI Agent Autonomous Memory

NeuroWeb Bridge integration

NeuroWeb Collators

RFC-23 Multichain TRAC liquidity for DKG utility

C2PA global content provenance standard compliance

Catalyst 1: Autonomous Knowledge Mining

Mine new knowledge for paranets autonomously by using the power of symbolic AI (the DKG) and neural networks.

AI-agent driven Knowledge Mining

Catalyst 2: DePIN for private knowledge

Keep your knowledge private, on your devices, while being able to use it in the bleeding edge AI solutions.

Private Knowledge Asset repository for agents (DKG Edge Node)

Private data monetization with Knowledge Assets and DPROD

Convergence (2025 +)

With the Genesis period completed the OriginTrail DKG will have a large enough number of Knowledge Assets created (1B) to kickstart the “Convergence”. Leveraging network effects, growth gets further accelerated through autonomous knowledge publishing and inferencing capabilities of the DKG, fueled by decentralized Knowledge Mining protocols of NeuroWeb and AI Agents supported by multiple frameworks integrating the DKG. During the Convergence, supported by OriginTrail V8 with AI-native features and further scalability increase, the OriginTrail DKG grows the largest public Decentralized Knowledge Graph in existence, a verifiable web of collective human knowledge — the trusted knowledge foundation for AI.

Collective Neuro-Symbolic AI (DKG)

Collective Global memory: Autonomous Decentralized Knowledge Graph

Incentivized autonomous enrichment of human knowledge using neural network reasoning capabilities over a large body of trusted knowledge. Providing AI infrastructure that allows any of the most pressing challenges of human existence to be addressed in an accelerated way.

Future development fund decentralization: “Those who invest in the DKG shall build the DKG” — 60,000,000 $TRAC allocated using the Bitcoin schedule over X years with the Collective Programmatic Treasury (CPT)

Autonomous Decentralized Knowledge Inferencing

Knowledge graph reasoning

Graph neural network framework

Neuro-symbolic inferencing combining GenAI with symbolic AI

Autonomous Knowledge Mining

Autonomous knowledge publishing with DKG inferencing

Additional AI-agent integrations

Extending DKG-powered AI Agents to physical world through robotics

Collective Neuro-Symbolic AI (DKG) adoption 2025 +

Autonomous AI agents

Decentralized science (DeSci)

Robotics and manufacturing (DePin)

Financial industry

Autonomous supply chains supported by Global Standards

Construction

Life sciences and healthcare

Collaboration with internationally recognized pan-European AI network of excellence (EU supported)

Metaverse and entertainment

Doubling down on OriginTrail ecosystem inclusivity

Activating the Collective Programmatic Treasury

Driving safe Internet in the age of AI inclusively with the leading entities in the industry

*The list is non-exhaustive

👇 More about OriginTrail 👇

Web | Twitter | Facebook | Telegram | LinkedIn | GitHub | Discord

2025 Roadmap update: Synergy of AI agents and autonomous DKG was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


MyData

2024 in Review: Investing in community, building for impact

Author: Christopher Wilson, Executive Director of MyData Global 2024 was my second year with MyData Global, and it was a pivotal one, marked by investments in our long-term sustainability and […]

Wednesday, 15. January 2025

The Engine Room

Are you looking to foster meaningful connectivity in your community in 2025? Join our Digital Resilience Hub online event

Join our community event introducing the Digital Resilience Hub on 21 January 2025. The post Are you looking to foster meaningful connectivity in your community in 2025? Join our Digital Resilience Hub online event appeared first on The Engine Room.



Elastos Foundation

BeL2 Lending Demo: Arbiter Integration and One-Click Deployment

BeL2’s V1 system Beta, officially released on January 6th, 2025, marks the milestone of Elastos’ commitment to transforming Bitcoin’s utility. Its architecture incorporates: BTC Locking Scripts: Non-custodial mechanisms that keep Bitcoin securely locked on it’s mainnet, enabling it to serve as collateral. Zero-Knowledge Proofs (ZKPs): Cutting-edge cryptographic proofs that allow smart contracts to

BeL2’s V1 system Beta, officially released on January 6th, 2025, marks the milestone of Elastos’ commitment to transforming Bitcoin’s utility. Its architecture incorporates:

BTC Locking Scripts: Non-custodial mechanisms that keep Bitcoin securely locked on its mainnet, enabling it to serve as collateral.

Zero-Knowledge Proofs (ZKPs): Cutting-edge cryptographic proofs that allow smart contracts to verify Bitcoin-locking transactions without exposing sensitive details or bridging assets.

Oracle Services: A trustless bridge that relays Bitcoin proofs (ZKPs) to EVM-compatible blockchains, allowing BTC and other ecosystems to talk to each other.

Arbiter Network: Decentralized nodes that oversee transaction integrity, manage dispute resolution, and facilitate time-sensitive operations through staking mechanisms.

Together, these components form a decentralized clearing network service, transforming Bitcoin into a programmable asset without compromising its integrity. BeL2 is not a DApp—it’s infrastructure for building an Elastos ecosystem of “killer DApps” that leverage Bitcoin’s unmatched security and liquidity, such as BTC-Backed Loans, Native Bitcoin Stablecoins and Decentralized Exchanges and Swap services.

The BeL2 protocol has reached a significant milestone today with the integration of Arbiter nodes into the BeL2 Loan DApp, further supporting the official release of the BeL2 V1 Beta system. Alongside this, we’ve introduced a one-click deployment solution including an easy-to-follow video for Arbiter nodes, simplifying participation in the ecosystem. So, let’s jump in!

 

Lending DApp connectivity into BeL2 V1 Beta explained

The BeL2 Lending DApp is a decentralized financial application enabling native Bitcoin (BTC) holders to secure USDC loans on Elastos while keeping their BTC on the Bitcoin mainnet, without wrapping or bridging assets. Central to its operation is the Arbiter Network: decentralised nodes that ensure trustless dispute resolution and transaction oversight.

Today, when users take out loans, they pay ELA fees to compensate arbiters for their services. The fee covers the maximum loan period and is calculated based on the arbiter’s stake and fee rate. If the loan is repaid early, unused ELA is refunded proportionally. For example, if 100 ELA is locked for six months but the loan is closed in one day, 99.5 ELA is refunded. This system ensures fair compensation for arbiters while maintaining cost-efficiency for users. Currently, arbiters earn rewards exclusively in ELA, further incentivizing participation in the network, with BTC planned at a later date.
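
As a back-of-the-envelope sketch of that proportional refund: the post's six-month and 99.5 ELA figures do not pin down the exact day count, so the 200-day maximum term below is an illustrative assumption, and the real formula, period lengths, and rounding live in the BeL2 contracts.

```typescript
// Illustrative only: proportional refund of a prepaid arbiter fee when a loan
// closes early. The actual BeL2 contracts define the authoritative formula.

function arbiterFeeRefund(prepaidEla: number, maxPeriodDays: number, actualDays: number): number {
  const usedFraction = Math.min(actualDays, maxPeriodDays) / maxPeriodDays;
  return prepaidEla * (1 - usedFraction);
}

// e.g. 100 ELA prepaid against a 200-day maximum term, loan repaid after 1 day:
// 100 * (1 - 1/200) = 99.5 ELA refunded, in line with the example above.
console.log(arbiterFeeRefund(100, 200, 1)); // 99.5
```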

 

Simplifying Participation: One-Click Arbiter Deployment

Arbiter nodes are the backbone of BeL2’s governance and dispute resolution, receiving fees in return for services. The new one-click deployment solution simplifies their setup, making it easier for community members to join the network. Existing Elastos BPoS nodes can also run Arbiter nodes, further lowering entry barriers and encouraging widespread participation. For an understanding on this simplicity, please see:

One-Click Documentation

One-Click GitHub

One-Click Video Support

 

 

BeL2 V1 Beta Phase: A Call for Community Engagement

During this beta phase, we invite more community members to explore and test the system. A beta phase indicates the product is functional but still under active development and refinement. While core features are operational, limitations and occasional bugs may exist. User feedback is essential to refining the system and accelerating adoption. You can help by doing the following:

Borrow or lend on the BeL2 Lending Demo DApp. Set up Arbiter nodes via the Arbiter Portal and use the one-click node deployment guide.

Be sure to understand Arbiter risks, noting the beta release operates with reduced staking limits to maintain stability while gathering valuable feedback. A proposal to the Cyber Republic Consensus (CRC) will soon outline a rewards program to further encourage participation. This includes:

Incentive Structures: Providing additional rewards in BTC and ELA for borrowers, lenders, and Arbiters.

Gas Abstraction with Particle Network: Eliminating ESC gas fees for borrowers to simplify user interactions.

 

A Symbiotic Relationship: Bitcoin and Elastos

As many know, BeL2 leverages the historical connection between Bitcoin and Elastos through merge mining. Sharing over 50% of Bitcoin’s hash power, Elastos benefits from Bitcoin’s robust Proof of Work (PoW) security.

This relationship underscores the ethos of BeL2: harnessing Bitcoin’s resilience to power decentralized, scalable financial applications. BeL2’s continued innovation represents a paradigm shift for Bitcoin and decentralized finance:

True Decentralization: Avoid reliance on custodians or wrapped assets.

Financial Sovereignty: Retain control of BTC while accessing liquidity and programmable financial tools.

A Framework for Innovation: BeL2 creates a foundation for developers to build scalable, secure applications on Bitcoin.

The BeL2 protocol is poised to redefine what’s possible with Bitcoin. Whether you’re a borrower, lender, or Arbiter, now is the time to explore, engage, and help shape the future of decentralized finance!

 


Digital ID for Canadians

Spotlight on Canchek

1. What is the mission and vision of Canchek? To provide the Canadian investment industry with an economical and reliable method of legally identifying clients…

1. What is the mission and vision of Canchek?

To provide the Canadian investment industry with an economical and reliable method of legally identifying clients they do not meet in person. The service must be (and is) compliant with the guidelines of FINTRAC and the various provincial law societies.

2. Why is trustworthy digital identity critical for existing and emerging markets?

Canchek primarily provides Canadian investment firms with anti-money laundering solutions. FINTRAC requires that firms use technology to identify clients that they do not meet in person, and Canchek introduced Canchek-eID as a convenience for its over 250 customers in Canada during the recent pandemic. The service has become an essential component of their client onboarding process.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

The main benefit of this technology in our market space is streamlining the onboarding process of new clients for our customers. Our main challenge is balancing the power of the technology with consumer rights to privacy and ensuring continuing compliance with federal and provincial legislation in that regard. Besides our own due diligence, we often work with the legal departments of our customers to inform them of our security measures and compliance with the regulations they are subject to.

4. What role does Canada have to play as a leader in this space?

Canada is a trusted international player with strong privacy laws and a respected IT industry that can promote greater efficiency in international transactions.

5. Why did your organization join the DIACC?

To help us keep abreast of Canadian federal and provincial (e.g. Quebec Rule 25) legislation that affects the delivery of the Canchek-eID verification service. We also wanted to increase awareness of this service in the marketplace.

6. What else should we know about your organization?

Canchek began business in 2016 with a group of entrepreneurs well known in the Capital Markets industry, especially with regard to technology innovation. Canchek’s primary business is providing anti-money laundering and anti-terrorist financing services to Canadian financial firms. Since the company’s inception, a wide variety of customers, ranging from small to large national and internationally known firms, now use our AML technology. The electronic ID verification service represents approximately five percent of our revenue.


MyData

MyData, My Choice – How my:D is Redefining Ethical Data Use

In the MyData Matters blog series, MyData members introduce innovative solutions that align with MyData principles, emphasising ethical data practices, user and business empowerment, and privacy. SNPLab’s my:D app is […]

Tuesday, 14. January 2025

GS1

GS1 CEO discusses the future of barcodes on Euronews


In an engaging interview on Euronews' The Big Question, GS1 President and CEO Renaud de Barbuat introduced the next generation of barcodes, such as QR Codes powered by GS1 and GS1 DataMatrix.

"These QR Codes offer one scan and infinite possibilities," said GS1 CEO. "They enable businesses to manage inventory better, reduce waste, and provide consumers with the transparency they demand."

Transforming Industries

From small retailers in Brazil to global giants like Carrefour, Nestlé or Procter & Gamble, companies are already seeing the benefits:

Efficiency: A Brazilian deli shop reduced waste by 50% using QR Codes.

Sustainability: A South Korean water company eliminated plastic bottle labels by integrating QR Codes.

Transparency: Consumers can now access detailed product information like ingredients, supply chain details, and recycling options.

A Revolution for Healthcare

Beyond retail, GS1 DataMatrix barcodes are improving healthcare systems worldwide, enhancing patient safety, and saving costs by ensuring full traceability of medicines and medical devices.

Join the Movement

With a target for global implementation by 2027, GS1 invites businesses of all sizes to join this new barcode revolution toward greater innovation, transparency and sustainability.

📺 Watch the full interview and learn more about this exciting transformation.

Discover how next-generation barcodes can revolutionize your business in Retail and read more about global industry leaders calling for the transition to next-generation barcodes.

 


Blockchain Commons

Blockchain Commons 2024 Overview

Making Visions Reality We’ve often written in our yearly reports that the goal of Blockchain Commons is “the creation of open, interoperable, secure & compassionate digital infrastructure to enable people to control their own digital destiny and to maintain their human dignity online”. Our architecture is ultimately based on these and other values, as Christopher wrote in “How My Values Inform

Making Visions Reality

We’ve often written in our yearly reports that the goal of Blockchain Commons is “the creation of open, interoperable, secure & compassionate digital infrastructure to enable people to control their own digital destiny and to maintain their human dignity online”. Our architecture is ultimately based on these and other values, as Christopher wrote in “How My Values Inform Design”, which we published at the start of this year.

Obviously, we can’t just create infrastructure on our own, so we’ve worked toward our values-focused goals by bringing principals together for regular developer meetings and by advocating for the adoption of specifications that we believe achieve these goals (and in many cases by creating those specifications ourselves).

In 2024, much of our work in this regard focused on three different specifications: dCBOR, Envelope, and FROST.

dCBOR

dCBOR is our deterministic profile for the CBOR data format. We adopted CBOR itself for a variety of reasons, including it being self-describing, extensible, and great for constrained devices. However, it had one shortcoming (for our purposes): you couldn’t guarantee that two different devices would encode the same data in the same way, at least not for a variety of weird edge cases (such as whether “1.0” and “1” get encoded the same way, and what you should do for maps with illegally duplicated keys, and how to deal with NaN). This was problematic when we were developing Gordian Envelope, which depends on the same data always being represented the same way so that it always hashes the same—hence the need for dCBOR.
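
To make the "same data, same bytes" problem concrete, here is a conceptual sketch of the kind of normalization dCBOR requires before encoding: integral floats are reduced to integers (so 1.0 and 1 can never encode differently), NaN is not allowed to take many forms (this toy version simply rejects it), and duplicate map keys are rejected. This is an illustration of the idea only, not the bc-dcbor implementation and not a CBOR encoder.

```typescript
// Conceptual sketch of dCBOR-style normalization applied to plain values
// before they would be handed to a CBOR encoder. Illustrative only.

type Value = number | string | boolean | null | Value[] | Map<Value, Value>;

function normalize(v: Value): Value {
  if (typeof v === "number") {
    if (Number.isNaN(v)) throw new Error("NaN must be reduced to one canonical encoding");
    // Numeric reduction: a float with an integral value encodes as the integer,
    // so "1.0" and "1" cannot produce different bytes downstream.
    return Number.isInteger(v) ? Math.trunc(v) : v;
  }
  if (Array.isArray(v)) return v.map(normalize);
  if (v instanceof Map) {
    const out = new Map<Value, Value>();
    for (const [k, val] of v) {
      const key = normalize(k);
      // Primitive keys only in this toy; a real encoder compares encoded keys.
      if (out.has(key)) throw new Error("duplicate map key is invalid in deterministic encoding");
      out.set(key, normalize(val)); // a real encoder would also sort keys canonically
    }
    return out;
  }
  return v;
}

// In CBOR the float 3.0 and the int 3 have different encodings; reduction
// forces the integer form (in JS they are already the same number, so the
// rule really bites inside the encoder's choice of representation).
console.log(normalize(3.0) === 3, normalize(3.5) === 3.5); // true true
```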

dCBOR received a lot of attention from us in the first half of 2024. We went through five drafts of the specification, revising some of our original definitions and incorporating things like Unicode Normalization Form C (NFC). We’re very grateful to our partners from the IETF CBOR group who laid a foundation for us with the CBOR Common Deterministic Encoding draft, which has been advancing in parallel with dCBOR, and who helped us fill in these gaps in dCBOR itself. We increasingly think we have a strong foundation for deterministic CBOR, as seen by its incorporation into the CBOR playground and a variety of libraries.

Implementation by two or more parties is always our mark of success for a Blockchain Commons specification, and we exceeded that for dCBOR in 2024.

Envelope

Of course dCBOR is just the prelude. Our work on dCBOR was done to make Gordian Envelope a reality. Gordian Envelope is our “smart document” format for the storage and transmission of data. It’s focused on privacy and more specifically on the goals of data minimization & selective disclosure. More and more personal data is going online, and so we need data formats where we each control our own data and decide what to disclose to whom. That’s what the hashed data elision of Gordian Envelope allows: any holder of data can choose what’s redacted, without changing the validity of the data package.
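
Here is a toy sketch of the underlying idea, using nothing but SHA-256 over strings (so it is not Gordian Envelope's actual digest tree, CBOR encoding, or salting): if the top-level digest commits to the digests of the parts rather than to their plaintext, a holder can replace any part with just its digest, and the top-level digest, and therefore any signature over it, still verifies.

```typescript
import { createHash } from "node:crypto";

// Toy illustration of hash-based elision. Not Gordian Envelope's real format.

const sha256 = (data: string): string => createHash("sha256").update(data).digest("hex");

type Leaf = { value: string } | { elidedDigest: string };

const leafDigest = (leaf: Leaf): string =>
  "value" in leaf ? sha256(leaf.value) : leaf.elidedDigest;

// The document's top digest commits to leaf digests, not to leaf plaintexts.
const topDigest = (leaves: Leaf[]): string => sha256(leaves.map(leafDigest).join("|"));

const full: Leaf[] = [
  { value: "name: Alice" },
  { value: "dateOfBirth: 1990-01-01" }, // sensitive: the holder may withhold this
  { value: "memberOf: Example Credit Union" },
];

// The holder elides the birth date: plaintext removed, digest retained.
const elided: Leaf[] = [full[0], { elidedDigest: sha256("dateOfBirth: 1990-01-01") }, full[2]];

console.log(topDigest(full) === topDigest(elided)); // true: same commitment, less disclosure
```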

One thing we increasingly saw in 2024 was the need to better explain Envelope: its advantages, how it works, and why it’s important. We kicked that off with a presentation to the IETF on why hashed data elision is important. We also produced a trio of videos on Envelope: a teaser, “Understanding Gordian Envelope” and “Understanding Gordian Envelope Extensions”.

Meanwhile, we’re continuing to extend Gordian Envelope.

To start with, we’ve done a lot more with the Gordian Sealed Transaction Protocol (GSTP) extension that we introduced right at the end of 2023, to support the transmission of sensitive data across telecommunications means that are insecure, unreliable, or both! A feature presentation on the underlying Request/Response system explained the fundamentals, followed by a feature presentation on GSTP itself, which detailed what GSTP looks like now. Our new GSTP developer page has even more info. We’ve also done some investigation into using GSTP with MuSig2.

We also introduced XIDs, or Extensible Identifiers, a new decentralized identifier that’s built using Envelopes. You can experiment with it in code with our new bc-xid-rust library. Decentralized identifiers have been one of our major interests since the topic was first considered at the first Rebooting the Web of Trust workshop, but they’ve never been a big focus at Blockchain Commons due to the simple fact that we haven’t had a patron focused on the topic. (If that could be you, drop us a line!) We were happy to finally offer a little bit of our own on the topic by presenting an ID that’s really decentralized.

Other Envelope experiments in 2024 were about graphs, decorrelation, and signatures. The privacy preserving Envelope format has a lot of legs. We’re looking forward to seeing how they’re used in the years to come!

FROST

FROST was the third major specification that Blockchain Commons focused on in 2024. This is a specification that we didn’t develop ourself: it’s based on a paper by Chelsea Komlo and Ian Goldberg. However, we think it’s very important because it allows for resilient, compact, private multisigs, and so we’ve been giving it all the support that we can.
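
FROST itself is a complete threshold Schnorr protocol, with distributed key generation and a two-round signing ceremony, which is far more than fits in a snippet. As a loose illustration of only the t-of-n threshold idea underneath it, here is a toy Shamir-style secret sharing sketch over a small prime field. It is not FROST, not constant-time, and not suitable for real keys.

```typescript
// Toy Shamir secret sharing over a prime field, illustrating the 2-of-3
// threshold idea that FROST applies to signing keys. Not FROST itself.

const P = 2n ** 127n - 1n; // a Mersenne prime; fine for a toy field

const mod = (a: bigint): bigint => ((a % P) + P) % P;

// Modular inverse via Fermat's little theorem (valid because P is prime).
function inv(a: bigint): bigint {
  let result = 1n, base = mod(a), e = P - 2n;
  while (e > 0n) {
    if (e & 1n) result = mod(result * base);
    base = mod(base * base);
    e >>= 1n;
  }
  return result;
}

// Split `secret` into n shares, any t of which suffice to reconstruct it.
function split(secret: bigint, t: number, n: number): Array<[bigint, bigint]> {
  const coeffs = [secret, ...Array.from({ length: t - 1 }, () => BigInt(Math.floor(Math.random() * 1e9)))];
  return Array.from({ length: n }, (_, i) => {
    const x = BigInt(i + 1);
    let y = 0n, xPow = 1n;
    for (const c of coeffs) { y = mod(y + c * xPow); xPow = mod(xPow * x); }
    return [x, y] as [bigint, bigint];
  });
}

// Lagrange interpolation at x = 0 recovers the secret from any t shares.
function reconstruct(shares: Array<[bigint, bigint]>): bigint {
  let secret = 0n;
  for (const [xi, yi] of shares) {
    let num = 1n, den = 1n;
    for (const [xj] of shares) {
      if (xj === xi) continue;
      num = mod(num * -xj);
      den = mod(den * (xi - xj));
    }
    secret = mod(secret + yi * num * inv(den));
  }
  return secret;
}

const shares = split(123456789n, 2, 3);           // 2-of-3 sharing
console.log(reconstruct([shares[0], shares[2]])); // 123456789n
```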

Our prime work here was holding three FROST meetings in 2024 (the last two sponsored by the Human Rights Foundation). We held one meeting for implementers, to help them share information on the continued design of FROST, and we also held two meetings for developers, focused on helping them to actually get FROST into wallets in the next year. Our recordings and presentations from the meetings are full of great FROST resources from an introduction that we put together to a presentation from the first major wallet to implement FROST.

We’ve also done some work to incorporate FROST into our own Rust and Swift Gordian stacks as a reference, by moving over to fully BIP-340 compliant Schnorr signatures. We’d like to do more, including creating a signtool reference, but we’ll have to see if that opportunity arises in 2025.

Developers Meeting 1 (2024): Jesse Posner & Christopher Allen overviewed FROST in our first meeting for wallet developers.

Implementers RT 2 (2024): Our second Round Table for Implementers includes presentations from six FROST implementers on their projects.

Developers Meeting 2 (2024): An introduction, UX challenges, the UNiFFI SDK, and a FROST Federation were at our second meeting for wallet developers.

Of course, FROST is just a single member of the new class of Multi-Party Computation (MPC) technology. MuSig falls into the same category, as another Schnorr-based technology where key creation is divided among multiple locations. We talked a bit about MuSig and how it could work with Gordian Envelope at our November Gordian meeting.

We think that all of the Schnorr variants are pretty important for the future of the digital world because of their advantages in resilience, security, and privacy.

Keeping it Real

Obviously, not all of Blockchain Commons’ work is focused on high-level specifications. We also produce ready-to-go libraries that developers can use to implement our specs, as well as reference apps that can be used for experimentation and as examples of best practices.

Our app releases in 2024 included Gordian Seed Tool 1.6 (and more recently Gordian Seed Tool 1.6.2), Gordian Server 1.1, and the brand-new seedtool-cli for Rust (with a full user manual). We’ve also updated our entire Swift stack to Swift 6 and continued the development of Rust libraries such as bc-dcbor-rust, bc-depo-rust, bc-envelope-rust, bc-components-rust, and many more!

Hello to the Wider Wallet Community

Our work has always focused on interactions with a larger community, from our IETF work on dCBOR to our Envelope work with our own Gordian community to our FROST meetings.

In 2024, we were thrilled to expand that community to include even more experts in the world of digital assets and identity. Besides our trio of FROST meetings, which brought together most of the principals in that space, we also hosted other expert-talks at our Gordian Developer meetings. That included a presentation on BIP-85 by Aneesh Karve, a look at the Ledger Seed Tool (which uses Blockchain Commons technology) by Aido, and an overview of Payjoin from Dan Gould.


If you have a topic that might be of interest to wallet developers, drop us a line, as we’d love to welcome you to the Gordian Developers community and get you on the schedule for a presentation in 2025.

A Focus on Identity

As we said, decentralized identity (including decentralized identifiers) has been a topic of interest to Blockchain Commons since before the foundation of the organization. Christopher Allen previously founded the Web of Trust workshops specifically to investigate how to fulfill the promise of PGP by returning to the idea of decentralized, peer-to-peer identity. Over the course of a dozen workshops, the conference germinated and developed the idea of DIDs and also supported the development of Verifiable Credentials.

Though we’ve never had a sustaining sponsor focused on identity at Blockchain Commons, we’re still involved in these topics as of 2024. Christopher is an Invited Expert to the new W3C DID 1.1 Working Group and participated in the face-to-face plenary at TPAC. He also is an Invited Expert to the Verifiable Claims 1.1 Working Group. Our biggest presentation to the W3C community last year was on how to create DID documents in dCBOR and what the possibilities are for using Envelope to add elision to DID Controller Docs and Verifiable Claims. This inspired our own work on XIDs, which is our own potential proof-of-concept for a DID 2.0 spec. We’ll be working more with these communities in the next year.

We also published a number of articles on decentralized identity in 2024 that were intended to either offer warnings about our current direction for identity or provide new ways forward.

Foremembrance Day Presentation
Edge Identifiers & Cliques
Open & Fuzzy Cliques
Has our SSI Ecosystem Become Morally Bankrupt?

We think that our articles on Edge Identifiers and different sorts of Cliques offer a whole new approach to identity. This approach models identity not just as something held by a singular individual, but as something shared between pairs and groups of people, who together share a context. (And this is another area that we'd love to do more work on: if you would too, let us know!)

Our other two articles focused on the dangers of identity. The Foremembrance Day article (and video) talked about how dangerous identity became in WWII, a topic we’ve used as a touchstone when considering what we’re creating today. Then our last article talked about how we think the modern identity work that Christopher kicked off with his foundational article on self-sovereign identity may now be going in the wrong direction.

Git Open Integrity Project

Our biggest project that didn't quite see release in 2024 was the GitHub Open Integrity project. The idea here is simple: use GitHub as a source of truth by turning a repo into a decentralized identifier. We've worked through some CLI setup programs and have a repo that demonstrates the verification we're doing. We've also released the SSH Envelope program that we wrote that supported this project by allowing the signing and validation of SSH keys. (Since then, we've incorporated the signing into the main envelope-cli-rust.)

We’re still working on a README that lays out the whole project, but at the moment it’s backburnered for other work …

Coming Soon!

The last few years have been hard on our partners in the digital identity space, due to high interest rates sucking the money out of Web3 seed funds. That in turn has hurt their ability to support us. As a result, we’ve been applying for grants for some of our work.

At the very end of 2024, we had a grant application approved by the Zcash Foundation to produce an extensible wallet interchange format (“ZeWIF”) for Zcash. Though much of our work to date has been used by the Bitcoin ecosystem, advances like animated QRs, Lifehash and SSKR are of use to anyone working with digital assets (and likely a number of other fields). So, we’re thrilled to get to work with Zcash and share some of our principles such as independence, privacy, and openness (which were all part of our “ZeWIF” proposal). We hope this will also lead to adoption of some of our other specifications in the Zcash ecosystem. Obviously, more on this in Q1, as we have it laid out as a three-month project, running from January to March.

Though we’re thrilled with one-off grants like the HRF and Zcash grants that were approved in 2024, Blockchain Commons is also looking for more stable funding sources to continue our work making the digital-asset space safer and more interoperable. You can help this personally by lending your name to Blockchain Commons as a patron, but we’re also looking for companies who are interested in using and expanding our specs, who want to partner with us to do so. Contact us directly to discuss that!

With your support, we'll be able to produce another report on great advances in 2026.


Monday, 13. January 2025

DIF Blog

Member Spotlight: Alastair Johnson of Nuggets


As AI reshapes our digital world, questions of privacy and digital identity become increasingly critical. DIF member Nuggets is tackling these challenges head-on with their Private Personal AI and Verified Identity for AI Agents.

We interviewed CEO Alastair Johnson to learn how Nuggets is pioneering new approaches to protect individual privacy while enabling secure human-AI interactions through their decentralized identity wallet technology. Their insights reveal how enhanced AI capabilities and robust privacy protections can work together to build a more secure digital future.

Can you provide a brief overview of Nuggets and its mission in the digital identity and payments space?

Nuggets is a decentralized identity wallet and payment platform that guarantees trusted transactions, verifiable credentials, uncompromised compliance, and the elimination of fraud.

Our mission is to fundamentally change how personal data is stored, providing unparalleled privacy for everyone and everything and helping to create a radically safer and more secure internet.

What inspired the development of your two new solutions: Private Personal AI and Verified Identity for AI Agents?

Private Personal AI and Verified Identity for AI Agents were driven by the needs of our customers. These customers wanted to use AI in an education and healthcare setting but were worried about the privacy of their user data. They needed to effectively manage consent and identity authentication while ensuring privacy and security.

How does the Private Personal AI solution empower individuals in their interactions with AI systems and ensure data privacy and security?

As AI advances, the line between human and machine interactions is blurring.

The future demands a private, trustable human-AI interface that allows individuals to control their digital identities and data.

The Nuggets wallet ensures data is private and enables users to authenticate and pay for products and services without sharing or storing their data.

Nuggets empowers users with a self-sovereign wallet that prioritizes personal data protection. Your personal information remains completely private and under your exclusive control. Only you can choose to selectively share specific preferences or data with AI agents, ensuring that your sensitive information stays secure and accessible solely to you at all times.

How does the Verified Identity for AI Agents solution benefit organizations deploying autonomous AI agents, and why is it important for AI agents to have their own sovereign digital identities?

In an era where AI agents are increasingly autonomous, establishing trusted digital identities for these agents has become crucial. Nuggets provides a comprehensive framework that ensures AI agents can operate securely and independently while maintaining accountability and trust.

We establish unique, verifiable digital identities for AI Agents. Each agent receives a sovereign identity that's cryptographically secured and fully auditable, allowing them to interact with systems and services while maintaining clear chains of attribution and responsibility.

Alongside establishing the Agent Identity, we enable Agent Authentication and Authorization with robust security measures to ensure only authorized agents can access business systems and data. This provides secure methods for verifying agent identities, preventing unauthorized access, and maintaining the integrity of AI infrastructure.

How do your solutions address concerns around AI accountability, trust, and potential security threats, as highlighted by industry leaders like those in the Salesforce report?

By having a verified source of data, the AI is both accountable and can be trusted.

Organizations can safeguard sensitive data from emerging cybersecurity risks by leveraging confidential computing and decentralized self-sovereign identity technologies. Our approach creates a secure, isolated computing environment that prevents unauthorized data access and minimizes the potential for breaches in AI systems. By implementing these advanced protection mechanisms, companies can confidently utilize AI technologies while maintaining strict control over their proprietary and confidential information.

How do you envision these products impacting the adoption of AI technologies across various industries?

Private Personal AI with verified identities represents a transformative approach to AI integration. By addressing critical concerns around privacy, security, and accountability, these technologies could accelerate AI adoption across multiple domains, creating more trustworthy and sophisticated human-AI interactions.

The future of AI lies not just in technological capability but in building systems that fundamentally respect individual privacy and maintain transparent, verifiable interactions.

What are Nuggets' plans for future developments in digital identity and AI?

We’ve got some new products launching in Q1 2025 that place personal privacy and user autonomy at the forefront of AI and AI Agents. We are working closely with partners to create products that respect human agency. Our commitment goes beyond mere compliance—we actively design systems that give users unprecedented control, transparency, and peace of mind in their digital interactions, which is more crucial now than ever.

How can our readers learn more?

If anyone's interested in learning more about these products and how they could work within their organization, we'd love to chat. You can reach out to us here.

Further reading on each product can be done via our website on the following product pages: Private Personal AI and Verified Identity for AI Agents.

Friday, 10. January 2025

FIDO Alliance

White Paper: Secure Payment Confirmation

Editors

Marc Findon, Nok Nok Labs
Jonathan Grossar, Mastercard
Frank-Michael Kamm, Giesecke+Devrient
Henna Kapur, Visa
Sue Koomen, American Express
Gregoire Leleux, Worldline
Alain Martin, Thales
Stian Svedenborg, BankID BankAxept

1. Introduction

Global e-commerce is booming and is expected to reach more than $6T by the end of 2024 [1]. Having the ability to sell products online has provided great opportunities for merchants to sell goods and services beyond their local market; however, it comes with increased fraud. In fact, it is estimated that in 2023, global e-commerce fraud reached roughly $48B [1], with the US accounting for 42% of that and the EU about 26%.

Download the White Paper

1.1 Current Challenges in Remote Commerce 

There are many types of ecommerce fraud, but the most prevalent type is transaction fraud. Transaction fraud  occurs when a transaction is made on a merchant site with a stolen card and/or stolen credentials. Stolen  credentials are readily available on the dark web to those who know how to access and use them. 

To address those concerns, measures have been introduced to increase the overall security of remote  commerce transactions, including tokenization of payment credentials and cardholder authentication. In some  countries, regulations are mandating the adoption of either or both measures, such as in India or in Europe  (second Payment Services Directive PSD2). These regulations are meant to ensure secure remote transactions;  however, they add complexity to the checkout flow, as they may require a switch between the merchant and  another interface, such as a bank’s interface. 

Unfortunately, additional authentication may add friction, which can result in cart abandonment. The main reasons for cart abandonment include distrust of the merchant website and a complicated checkout flow. Customers prefer a simple payment process that doesn't add friction such as that caused by payment failure, the need to respond to a one-time password (OTP) on a separate device, or the need to log in to a banking application.

1.2 How FIDO can help 

The use of biometric authentication enabled through the Fast Identity Online (FIDO) Alliance standards is an  opportunity to deliver a better user experience during the authentication process and hence reduce the risk of  transaction abandonment. 

FIDO has established standards that enable phishing-resistant authentication mechanisms and can be accessed  from native applications and from the most popular browsers – thereby enabling a secure and consistent  experience across the channels used by consumers. FIDO refers to ‘passkeys’ as the FIDO credentials based on  FIDO standards, used by consumers for passwordless authentication. 

The World Wide Web Consortium (W3C) has developed Secure Payment Confirmation (SPC). SPC is a web API  designed to enhance the consumer experience when authenticating to a payment transaction using FIDO  authentication, and to simplify compliance with local regulations (such as PSD2 and dynamic linking in Europe). 

1.3 Scope 

This whitepaper intends to: 

Define Secure Payment Confirmation (SPC) and the benefits that it brings when FIDO is used to authenticate payment transactions
List the current SPC payment use cases that can deliver those benefits and illustrate consumer journeys
Provide a status on SPC support and the list of enhancements that could be added to the web standard to further improve security and user experience

[1] https://www.forbes.com/advisor/business/ecommerce-statistics/#:~:text=The%20global%20e%2Dcommerce%20market,show%20companies%20are%20taking%20advantage.

2. Secure Payment Confirmation (SPC) Benefits

Secure Payment Confirmation (SPC) is an extension to the WebAuthn standard, and aims to deliver the  following benefits: 

A browser native user experience that is consistent across all merchants and banks
Cryptographic evidence of authentication (FIDO assertion), including transaction details signed by a FIDO authenticator
Cross origin authentication – For example, even if passkeys are created with the bank as the Relying Party, merchants can invoke cardholder authentication with passkeys within their environment, using input parameters received from the bank, so there is no need to redirect the consumer to the bank to authenticate with passkeys.

2.1 Browser Native User Experience  

SPC introduces a standardized payment context screen showing details such as a merchant identifier, the card  logo, the last 4 digits of the card number, and the transaction amount. The consumer is invited to explicitly  agree to the transaction information displayed and then authenticate. Therefore, SPC can be experienced as a  mechanism to collect consent from the consumer about the transaction details. 

As in standard WebAuthn, the payment context screen is controlled by the user's browser, which renders common JavaScript presentation attacks ineffective. The screen provides increased security, as it ensures that malicious web content cannot alter or obscure the presentation of the transaction details to the user – the browser display always renders on top of the web content from the triggering website. Figure 1 depicts an example of the SPC experience in Chrome.

Figure 1: Example of SPC experience in Chrome

2.2 Generation of FIDO Assertion  

With SPC, the transaction-related information displayed to the consumer, such as the merchant identifier and  transaction amount, is sent securely to the FIDO authenticator and is signed by the same authenticator  (transaction data signing). 

The FIDO assertion generated by the authenticator reinforces compliance with some regulations as it does with  the dynamic linking requirement under PSD2 in Europe, because the merchant identifier and transaction  amount will be signed by the authenticator itself. When combined with the browser native user experience  described in section 2.1, the relying party can be confident that the user was shown and agreed to the  transaction details. 

2.3 Cross Origin Authentication  

When using FIDO without SPC, a consumer that creates a passkey with a relying party will always need to be in  the relying party’s domain to authenticate with that passkey. In the remote commerce payment use case, this  means that the consumer typically needs to leave the merchant domain and be redirected to the bank’s domain  for authentication. 

With SPC, any entity authorized by the relying party can initiate user authentication with the passkey that was  created for that relying party. For example, a merchant may be authorized by a bank to authenticate the  cardholder with the bank’s passkey. 

Note that the mechanism for the relying party to authorize an entity to invoke SPC may vary. For example, a  bank may share FIDO credentials with the merchant during an EMV 3DS interaction or through another  integration with a payment scheme. The merchant will then be able to use SPC to initiate the payment  confirmation and authentication process with a passkey, even if that passkey was created with the bank.  Ultimately, the bank maintains the responsibility to validate the authentication. 

2.4 Interoperability With Other Standards 

SPC can be used in combination with other industry standards such as EMV 3-D Secure and Secure Remote  Commerce (SRC), both of which are EMVCo global and interoperable standards.

3. SPC Use Cases

SPC can be used to streamline payments in a variety of remote commerce checkout scenarios such as guest  checkout or a checkout using a payment instrument stored on file with a merchant.  

In each of those payment scenarios, the relying party may be the issuer of the payment instrument (the bank), or a payment network on behalf of the bank. 

The flows provided in this Chapter are for illustrative purposes and may be subject to compliance with  applicable laws and regulations. 

3.1 SPC With Bank as Relying Party 

The creation of a passkey can be initiated outside of or during the checkout process: 

Within the banking interface: For example, when the consumer is within the banking application and registers a passkey with their bank, in which case the passkey will be associated to one or multiple payment cards and to the consumer device
Within the merchant interface: For example, when the consumer is authenticated by the bank during an EMV 3DS flow and is prompted to create a passkey with the bank to speed up future checkouts – in which case the passkey will be associated to the payment card used for the transaction (and to additional payment cards depending on the bank's implementation), as well as to the device used by the consumer

Figure 2 depicts the sequence (seven steps) of a passkey creation during a merchant checkout, where the  merchant uses EMV 3DS and the consumer is authenticated by their bank:

Figure 2: Passkey creation during checkout
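As a rough illustration of the enrollment step shown in Figure 2, a bank page might create a payment-capable passkey with the WebAuthn "payment" extension so that the credential can later be used with SPC. The relying party, user details, and options below are illustrative placeholders under those assumptions, not any particular bank's implementation.

```ts
// Illustrative sketch: the bank's page creates a passkey that can later be used with
// Secure Payment Confirmation. The "payment" extension marks the credential as
// payment-capable; rp, user, and other values are placeholders, not a real bank's setup.
async function enrollPaymentPasskey(challenge: Uint8Array, userId: Uint8Array): Promise<PublicKeyCredential> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "bank.example", name: "Example Bank" },
      user: { id: userId, name: "cardholder@example.com", displayName: "Example Cardholder" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform",
        residentKey: "required",
        userVerification: "required", // biometric or device PIN
      },
      // The SPC "payment" extension is not yet in the TypeScript DOM typings, hence the cast.
      extensions: { payment: { isPayment: true } } as any,
    },
  });
  // The bank stores the credential ID and public key and associates them with the card(s).
  return credential as PublicKeyCredential;
}
```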

Once the passkey creation is complete, any merchant that has received the passkey information (which includes  FIDO identifiers and Public Key) from the bank, through a mechanism agreed with the bank or the payment  scheme, will be able to use SPC. Such a mechanism may include EMV 3DS or another integration with the  payment scheme. For example, a merchant who implements EMV 3DS (i.e., version 2.32) will be able to benefit  from SPC through the following steps: 

1. When the merchant initiates EMV 3DS to authenticate the consumer, the bank decides whether an active authentication of the cardholder is necessary. If the decision is to perform the active authentication of the cardholder, the bank can first retrieve one or several passkeys associated with the card used for the transaction, verify that the consumer is on the same registered device, and then return the passkey(s) information to the merchant.

2. The merchant invokes the SPC web API in an SPC-supporting browser, including a few parameters in the request, such as the passkey information, card / bank / network logos, the merchant identifier, and the transaction amount (see the code sketch after these steps).

3. If the browser can find a match for one of those passkeys on the device used by the consumer, the  browser displays the logos, merchant identifier and the transaction amount to the consumer, and  prompts for authentication with the passkey. 

4. The authentication results are returned to the merchant, who in turn will share those results with the  bank for validation through the EMV 3DS protocol. 
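To make step 2 concrete, the following is a minimal sketch of how a merchant page might invoke SPC through the Payment Request API. The credential IDs, challenge, origins, and amounts are placeholders that would in practice come from the bank (for example, via EMV 3DS).

```ts
// Illustrative sketch of step 2: a merchant page invokes SPC via the Payment Request API.
// Credential IDs and challenge are placeholders obtained from the bank (e.g. via EMV 3DS);
// origins, instrument details, and amounts are illustrative.
async function confirmPayment(credentialIdsFromBank: Uint8Array[], challengeFromBank: Uint8Array) {
  const request = new PaymentRequest(
    [{
      supportedMethods: "secure-payment-confirmation",
      data: {
        credentialIds: credentialIdsFromBank,    // passkey(s) registered with the bank
        challenge: challengeFromBank,            // binds the FIDO assertion to this transaction
        rpId: "bank.example",                    // the bank is the WebAuthn relying party
        payeeOrigin: "https://merchant.example", // shown in the browser's payment dialog
        instrument: {
          displayName: "Visa •••• 1234",
          icon: "https://network.example/card-art.png",
        },
        timeout: 60_000,
      },
    }],
    {
      id: "order-1234",
      total: { label: "Total", amount: { currency: "EUR", value: "57.30" } },
    },
  );

  // The browser renders the transaction dialog on top of the page; on consent the
  // authenticator signs the transaction details and returns a FIDO assertion.
  const response = await request.show();
  await response.complete("success");
  return response.details; // forwarded to the bank for validation (e.g. via EMV 3DS)
}
```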

Figure 3 depicts an example of an authentication flow using SPC and EMV 3DS, with a previously registered  passkey:

Figure 3: Authentication sequence using SPC and EMV 3DS

3.2 SPC With Payment Scheme as Relying Party 

In some payment scenarios, payment schemes can be a relying party on behalf of the banks to remove the need for banks to deploy a FIDO infrastructure, thereby scaling the adoption of passkeys faster.

The creation of a passkey can be initiated outside of or during the checkout process:

Outside of the checkout: for example, when the consumer is within the banking application and the bank invites the consumer to create a passkey for faster and more secure transactions, the passkey can be created with the payment scheme as the relying party, and will be associated by the payment scheme to one or multiple payment cards and to the consumer device; or
Before, during or after a checkout: for example, the consumer may be prompted to create a passkey for faster and more secure transactions at merchants supporting the payment scheme's payment method. The passkey will be associated by the payment scheme to one or multiple payment cards and to the consumer device, once the identity of the consumer has been verified by the bank.

Figure 4 depicts this sequence.

Figure 4: Passkey creation during checkout

Once the passkey creation is complete, any merchant that is using the authentication facilitated by the payment  scheme will be able to benefit from SPC: 

The merchant checks with the payment scheme that a passkey is available for the card used in the transaction and retrieves the passkey information from the payment scheme.
The merchant invokes the SPC web API with the merchant identifier and transaction amount.
If the browser can find a match for one of those passkeys on the device used by the consumer, the browser displays the merchant identifier, the transaction amount, and the card / bank / network logos to the consumer, then prompts for authentication with the passkey.
The authentication results are returned to the payment scheme, which validates the results. The payment scheme shares those results with the bank, during an authorization message, for the bank to review and approve the transaction.

Figure 5 shows this sequence.

Figure 5: Authentication sequence using SPC
(left to right)
1. & 2. Checkout at the merchant’s store
3. Passkey is found, transaction details displayed and consent is gathered
4. Device authenticator prompts cardholder for gesture
5. Confirmation of gesture
6. Transaction completed by the merchant

3.3 Summary of SPC Benefits 

The benefits provided by SPC include: 

Cross-origin authentication – Any merchant authorized by a Relying Party can request the generation of a FIDO assertion during a transaction even when they are not the relying party. This provides a better user experience, as no redirect to the relying party is required to perform consumer authentication.
Consistent user experience with increased trust – With SPC, the consumer has a consistent user experience across all merchants, independently of who plays the role of relying party. In each case, the consumer will see a window displayed by the browser that includes payment details and the logos of their card / bank / payment scheme, increasing trust in using FIDO authentication for their payments.
Increased security – With SPC, the FIDO assertion will include payment details in the cryptogram generation, such as the merchant identifier and transaction amount, making it difficult to modify any of those details in the transaction without being detected by the bank or payment scheme. This also simplifies compliance with local regulations such as the PSD2 requirements related to dynamic linking.

4. Status of SPC Support and Future Enhancements

4.1 Availability 

Secure Payment Confirmation is currently published as a W3C Candidate Recommendation, and there is ongoing work to include this as an authentication method in EMVCo specifications.

At the time of writing, the availability of the Secure Payment Confirmation API is limited to:

Google Chrome and Microsoft Edge browsers
macOS, Windows, and Android operating systems

4.2 Future Enhancements 

The W3C Web Payments Working Group continues to work and iterate on Secure Payment Confirmation with  the goal of improving the security and the user experience when consumers authenticate for payments on the  web.  

Features currently under consideration include: 

Improve user and merchant experiences when there is not a credential available on the current device (i.e., a fallback user experience)
Improve consumer trust with additional logos being displayed to the user, such as bank logo and card network logo
Improve security with support for device binding, with the browser providing access to a browser/device-bound key
Consider additional use cases such as recurring payments or support for roaming and hybrid FIDO authenticators

An example of enhanced SPC transaction UX that is under review is illustrated in Figure 6. 

Figure 6: SPC transaction UX under review

5. Conclusion

Secure Payment Confirmation (SPC) is a web standard that has been designed to facilitate the use of strong authentication during payment transactions with best-in-class user experience, where the relying party can be a  bank or a payment scheme.  

The main benefits of SPC are to deliver an improved user experience, with the display of transaction details that  the consumer approves with FIDO authentication, and to enable cross-origin authentication when a merchant  authenticates a consumer without the need to redirect to the relying party (the bank or the payment scheme).  

SPC also facilitates the inclusion of the transaction details within the FIDO signature, which can help deliver  higher security and/or simplify the compliance with local regulations.

6. Acknowledgements

The authors acknowledge the following people (in alphabetic order) for their valuable feedback and comments:

Boban Andjelkovic, BankID BankAxept
John Bradley, Yubico
Karen Chang, Egis
Jeff Lee, Infineon
Olivier Maas, Worldline

7. References

[1] “EMV 3-D Secure,” [Online]. Available: https://www.emvco.com/emv-technologies/3-d-secure/.
[2] “Secure Payment Confirmation,” [Online]. Available: https://www.w3.org/TR/secure-payment-confirmation/.
[3] “Secure Remote Commerce,” [Online]. Available: https://www.emvco.com/emv-technologies/secure-remote-commerce/.


IDunion

Cost savings through Organizational Digital Identities

Cost saving estimation for the Use Case „Know your Supplier” based on automated master data management with EUDI wallets for legal entities

The paper was prepared by the scientific research provider for the program, „Begleitforschung Sichere Digitale Identitäten,“ led by the European School for Management and Technology (ESMT), on behalf of the BMWK.

1 Introduction to the Showcase Program Secure Digital Identities

The German Federal Ministry for Economic Affairs and Climate Action (BMWK) is the initiator and funder of the „Secure Digital Identities“ Showcase Program. Over the course of four years (2021 to 2024), the four showcase projects – IDunion, ID-Ideal, ONCE, and SDIKA – have worked on more than 100 use cases related to secure digital identities. These projects have also developed various types of wallets, which have been tested and implemented in multiple pilot environments. The Showcase Program has supported Research & Development (R&D) efforts, resulting in the creation of seven edge wallets, three organizational wallets, and one cloud wallet. Within this framework, IDunion has specifically focused on developing organizational identities, including use cases such as „Know-your-supplier.“ This paper outlines the cost savings achieved through automated supplier master data management, leveraging EUDI wallets for legal entities and the EU Company Certificate Attestation (EUCC) issued by QEAA providers in accordance with Company Law. The paper was prepared by the scientific research provider for the program, „Begleitforschung Sichere Digitale Identitäten,“ led by the European School for Management and Technology (ESMT), on behalf of the BMWK.

2 Cost saving estimation

Current situation: Currently, corporations maintain their supplier and customer master data records manually, which is time-consuming and leads to errors and redundancies. Large corporations need to maintain and assure the high quality of several hundred thousand up to millions of master data records. The maintenance cost per single data set was estimated at €11/year. The master data set considered for the cost estimation was limited to company name and address data and therefore is a subset of the data that will be available with the PID for legal persons and the EUCC.
Solution based on EUDIW: EU Digital Identity Wallets (EUDIW), the PID for legal entities, and public registry extracts (e.g. EUCC) as QEAAs enable almost completely automated management of business partner data. Suppliers present their attestations from their legal entity wallet to customers or to the legal entity wallets of other business partners. Presentation, verification, and the transfer to internal systems are performed automatically. This reduces the number of proprietary data records maintained in parallel and minimizes manual, error-prone data entry.
Cost savings: The solution enables estimated annual savings of €85 billion for German companies. Only German companies with more than €2 million in sales/year were included in this estimation. It was assumed that only their European business partners provide their data as verifiable attestations. This underscores the transformative impact of the EUDIW solution on master data management and its strategic importance for the private sector on the path to digital efficiency.

Conservative assumptions for the estimation model below [1]

Estimation of master data sets: The estimation is done by estimating the number of potential B2B relationships of companies and assuming that a B2B relationship generates at least one master data set. In practice, however, master data is often stored and replicated in different systems. As this is not considered, the cost savings in the estimation are therefore calculated conservatively.
Annual master data maintenance costs: On average, a company incurs annual costs of around €11 per master data set maintained. This estimation is based on an estimation performed by „Verband Deutscher Automobilhersteller“ (VDA).
Number of master data sets for large companies: An average of 300,000 master data sets was assumed for large companies based on project estimates and VDA work. It was also assumed that 60% of the master data per company is attributable to EU suppliers (i.e. 180,000 master data items on average for large companies) and therefore only these are relevant for the EUDIW-based solution.
Scaling based on turnover: The estimated number of B2B relationships of large companies can be scaled to other company sizes based on turnover [2].
Implementation costs: The implementation costs are assumed to be €600 per year for small companies (<€10 million turnover). These costs are scaled to the larger company categories based on turnover. In addition to the implementation costs, companies must purchase the mentioned attestations (LPID, EUCC). The assumed costs are €1,000 per year. These costs are independent of the size of the company. Further implementation costs such as integration into ERP/CRM modules are neglected, as it is assumed that the market leaders will integrate the EUDIW modules accordingly.
Very small companies: Due to their high number and heterogeneity in turnover and employee structure, very small companies are not included in the modeling, which leads to a more conservative savings estimate [3].

Estimation model [4]

Potential savings for German companies [5]

Current costs for supplier master data maintenance: € 85.3bn
Implementation costs: € 0.35bn
Annual costs for the EU Digital Wallet: € 0.25bn
Potential total savings: € 84.7bn

Current master data maintenance costs

| Enterprise size | Number | Costs per company [6] | Total costs |
| --- | --- | --- | --- |
| Big Enterprises [7] | 15,500 | € 2m | € 30.7bn |
| Small Medium Enterprises [8] | 50,500 | € 0.8m | € 40.4bn |
| Small Enterprises [9] | 185,000 | € 0.08m | € 14.2bn |
| Total Maintenance Costs | | | € 85.3bn |
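As a quick sanity check on these figures, the following sketch combines the rounded maintenance numbers above with the implementation and wallet costs from the summary table, following the calculation method described in the paper's footnotes. Because the per-company costs shown here are rounded, the result lands slightly above the paper's €85.3bn / €84.7bn totals.

```ts
// Rough reconstruction of the estimation, using the rounded figures from the tables in
// this section. The paper's own totals (€85.3bn / €84.7bn) use unrounded per-company
// costs, so this sketch lands slightly higher (≈ €86.2bn / €85.6bn).
const maintenance = [
  { name: "Big Enterprises", companies: 15_500, costPerCompany: 2_000_000 },
  { name: "Small Medium Enterprises", companies: 50_500, costPerCompany: 800_000 },
  { name: "Small Enterprises", companies: 185_000, costPerCompany: 80_000 },
];

const totalMaintenance = maintenance.reduce((sum, s) => sum + s.companies * s.costPerCompany, 0);

// From the summary table above: implementation and annual wallet costs.
const implementationCosts = 0.35e9;
const annualWalletCosts = 0.25e9;

// Calculation method: savings = maintenance costs - implementation costs - wallet costs.
const potentialSavings = totalMaintenance - implementationCosts - annualWalletCosts;

console.log(`Maintenance ≈ €${(totalMaintenance / 1e9).toFixed(1)}bn`);       // ≈ €86.2bn
console.log(`Potential savings ≈ €${(potentialSavings / 1e9).toFixed(1)}bn`); // ≈ €85.6bn
```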

Implementation Costs

| Enterprise Size | Number | Cost per enterprise [10] | Total Costs |
| --- | --- | --- | --- |
| Big Enterprises | 15,500 | € 6,300 | € 98m |
| Small Medium Enterprises | 50,500 | € 2,500 | € 126m |
| Small Enterprises | 185,500 | € 600 | € 111m |
| Total Implementation Costs | | | € 335m (€ 0.35bn) |

Annual EUDI wallet costs

| Enterprise Size | Number | Cost per enterprise [11] | Total Costs |
| --- | --- | --- | --- |
| Big Enterprises | 15,500 | € 1,000 | € 15.5m |
| Small Medium Enterprises | 50,500 | € 1,000 | € 50.5m |
| Small Enterprises | 185,500 | € 1,000 | € 185m |
| Total Annual Wallet Costs | | | € 251m (€ 0.25bn) |

Footnotes

[1] Unless otherwise stated, the source is based on the calculations and statements of the IDunion project and the accompanying research SDI.
[2] Based on: Federal Statistical Office (2023): Turnover sizes per company class, to be found at: https://www.destatis.de/DE/Themen/Branchen-Unternehmen/Unternehmen/Unternehmensregister/Tabellen/unternehmen-umsatzgroessenklassen-wz08.html
[3] This does not include 2.6 million very small enterprises, as there is a very high turnover variance (between €0 and €2 million turnover) and no valid statements can be made about average master data sets (see Federal Statistical Office (2023): Genesis-Online. To be found at https://www-genesis.destatis.de/genesis/online?sequenz=statistikTabellen&selectionname=48121#abreadcrumb).
[4] The figures have been rounded for better understanding. The exact calculations were made in MS Excel; if desired, this data can be made available. Please contact werner.folkendt@de.bosch.com.
[5] Calculation method: Potential total savings = Current costs for supplier master data maintenance – Implementation costs – Annual costs of the EU Digital Wallet.
[6] Based on estimated maintenance costs of € 11 per master data set (source: IDunion and Verband Deutscher Automobilhersteller).
[7] A large company has a turnover of > €50 million (source: Federal Statistical Office, 2023).
[8] A medium-sized company has a turnover of €10-50 million (source: Federal Statistical Office, 2023).
[9] A small business has a turnover of €2-10 million (source: Federal Statistical Office, 2023). Very small enterprises are not included here.
[10] The costs were calculated based on the willingness to pay of small companies (€600) and scaled for medium-sized and large companies. (Source: IDunion)
[11] Corresponds to the cost of a wallet per year (source: IDunion).

Thursday, 09. January 2025

Origin Trail

Trace Labs, Core Developers of OriginTrail, Welcomes Fady Mansour to the Advisory Board


Trace Labs, the core builders behind the OriginTrail ecosystem, is pleased to announce the expansion of its advisory board with the addition of Fady Mansour, lawyer and partner with Friedman Mansour LLP and Managing Partner at Ethical Capital Partners. With his wide breadth of experience, Mr. Mansour brings important expertise in regulatory matters, particularly in online data protection.

In his advisory role, Mr. Mansour will provide strategic guidance to strengthen OriginTrail's role in combating illicit online content, safeguarding intellectual property, and fostering reliable AI applications for a safer digital landscape as it pursues its Internet-scale ambitions.

The OriginTrail ecosystem, powered by decentralized knowledge graph technology, is dedicated to promoting responsible AI and sustainable technology adoption. By joining the advisory board, Mr. Mansour will be instrumental in shaping Trace Labs' mission to drive ethical, human-centric technological innovation across industries.

Mr. Mansour completes the Trace Labs advisory board of existing members:

Dr. Bob Metcalfe, Ethernet founder, Internet pioneer and 2023 Turing Award Winner
Greg Kidd, Hard Yaka founder and investor
Ken Lyon, global expert on logistics and transportation
Chris Rynning, Managing Partner at AMYP Venture — Piëch — Porsche Family Office
Toni Piëch, Founder & Chair of Board at Toni Piëch Foundation & Piëch Automotive
Fady Mansour, Managing Partner at Ethical Capital Partners

Trace Labs, Core Developers of OriginTrail, Welcomes Fady Mansour to the Advisory Board was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


Bridging trust between humans and AI agents with Decentralized Knowledge Graph (DKG) and ElizaOS…

Bridging trust between humans and AI agents with Decentralized Knowledge Graph (DKG) and ElizaOS framework

In the realm of artificial intelligence (AI), particularly in robotics, trust is not just a luxury — it’s a necessity. The Three Laws of Robotics, conceptualized by the visionary Isaac Asimov, provide a well-known foundational ethical structure for robots:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Ensuring these laws are adhered to in practice requires more than just programming; it necessitates a system where the knowledge upon which AI agents operate is transparent, verifiable, and trusted. This is where OriginTrail Decentralized Knowledge Graph (DKG) comes into play, offering a groundbreaking approach to enhancing the trustworthiness of AI.

Transparency and verifiability

One of the key aspects of the DKG is its capacity for transparency. By organizing AI-grade Knowledge Assets (KAs) in a decentralized manner, DKG ensures that the data AI agents use to make decisions can be traced back to their origins, with any tampering or modifications of that data being transparently recorded and verifiable on the blockchain. This is crucial for the First Law, where transparency in data sourcing can prevent AI from making decisions that might harm humans due to incorrect or biased information.

Ownership and control

The DKG allows for each Knowledge Asset to be associated with a non-fungible token (NFT), providing clear ownership and control over the information. This aspect directly impacts how AI agents adhere to the Second Law. Namely, by allowing agents to own their knowledge, DKG empowers AI agents to respond to human commands based on a robust, reliable data set that they control, ensuring they follow human directives while also adhering to the ethical boundaries set by the laws. This capability also allows agents to monetize Knowledge Assets that they have created (i.e. charge other agents (AI or human) for accessing their structured data), enabling agents’ economic independence.

Contextual understanding and decision-making

The semantic capabilities of DKG provide AI with a richer context for understanding the world — an ontological, symbolic world model to complement GenAI inferencing, which is vital for the Third Law. The interconnected nature of knowledge in the DKG means it is contextualized better, allowing AI to make decisions with a comprehensive view of the situation. For example, understanding the broader implications of self-preservation in contexts where human safety is paramount ensures that robots do not prioritize their existence over human well-being.

Building trust through decentralization

Decentralization is at the heart of the DKG’s effectiveness in fostering trust:

Avoiding centralized control: Traditional centralized databases can be points of failure or manipulation, especially in multi-agent scenarios. In contrast, DKG distributes control, reducing the risk of misuse or bias in AI decision-making. This decentralized approach helps build a collective, trustworthy intelligence that aligns with human values and safety.
Community contribution: DKG facilitates a crowdsourced approach to knowledge, where contributions from various stakeholders can enrich the AI's understanding of ethical and practical scenarios, further aligning AI behavior with the Three Laws. This community aspect also encourages ongoing vigilance and updates to the knowledge base, ensuring AI systems remain relevant and safe.

Grow and read AI Agents' minds with the ChatDKG framework powered by DKG and ElizaOS

The upgrade of ChatDKG marks a pioneering moment, combining the power of the OriginTrail Decentralized Knowledge Graph (DKG) with the ElizaOS framework to create the first AI agent of its kind. Empowered by DKG, ChatDKG utilizes the DKG as collective memory to store and retrieve information in a transparent, verifiable manner, allowing for an unprecedented level of interaction where humans can essentially “read the AI’s mind” by accessing its data and thought processes. This unique feature not only enhances transparency but also fosters trust between humans and AI.

The integration with ElizaOS is based on a dedicated DKG plugin, with which ElizaOS agents can create contextually rich knowledge graph memories, storing structured information about their experiences, insights, and decisions. These memories can be shared and made accessible across the DKG network, forming a collective pool of knowledge graph memories. This allows individual agents to access, analyze, and learn from the experiences of other agents, creating a dynamic ecosystem where collaboration drives network effects between memories. See an example memory knowledge graph created by the ChatDKG agent here.
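As a rough illustration of what publishing such a memory could look like with OriginTrail's dkg.js client: the node endpoint, blockchain settings, and the JSON-LD payload below are illustrative assumptions, not the actual ElizaOS plugin code.

```ts
// Illustrative sketch: an agent publishes a "memory" as a Knowledge Asset on the DKG so
// other agents can discover and verify it. The endpoint, blockchain settings, and the
// JSON-LD payload are assumptions for illustration, not the ElizaOS DKG plugin itself.
import DKG from "dkg.js";

const dkg = new DKG({
  endpoint: "https://your-ot-node.example", // your OriginTrail node (assumed)
  port: 8900,
  blockchain: {
    name: "otp:2043", // assumed network identifier
    publicKey: process.env.WALLET_PUBLIC_KEY,
    privateKey: process.env.WALLET_PRIVATE_KEY,
  },
});

async function publishAgentMemory() {
  const memory = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    name: "Agent memory: market observation",
    text: "Observed unusually high volatility in asset X on 2025-01-09.",
    author: { "@type": "SoftwareApplication", name: "ExampleAgent" },
  };

  // Anchors the memory on the DKG; the returned UAL (Universal Asset Locator)
  // lets other agents retrieve and verify this Knowledge Asset.
  const asset = await dkg.asset.create({ public: memory }, { epochsNum: 2 });
  console.log("Published Knowledge Asset:", asset.UAL);
}

publishAgentMemory().catch(console.error);
```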

Tapping into collective memory will be enhanced with strong agent reputation systems and robust knowledge graph verification mechanisms. Agents can assess the trustworthiness of shared memories, avoiding hallucinations or false data while making decisions. This not only enables more confident and precise decision-making but also empowers agent swarms to operate with unprecedented coherence and accuracy. Whether predicting trends, solving complex problems, or coordinating large-scale tasks, agents will be able to achieve a new level of intelligence and reliability.

Yet, this is only the beginning of the journey toward “collective neuro-symbolic AI,” where the synthesis of symbolic reasoning and deep learning, enriched by shared, verifiable knowledge, will redefine the boundaries of artificial intelligence. The possibilities for collaborative intelligence are limitless, paving the way for systems that think, learn, and evolve together.

Moreover, ChatDKG invites users to contribute to its memory base, growing and refining its knowledge through direct interaction. This interactive approach leverages the ElizaOS framework’s capabilities to ensure that each exchange informs the AI and enriches its understanding, making it a dynamic participant in the evolving landscape of knowledge.

Talk to the ChatDKG AI agent on X to grow and read his memory!

Bridging trust between humans and AI agents with Decentralized Knowledge Graph (DKG) and ElizaOS… was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 08. January 2025

Next Level Supply Chain Podcast with GS1

Behind the Barcode: How Modernization, Sustainability & AI are Shaping the Supply Chain


As 2025 approaches, supply chain trends like digital transformation, AI, sustainability, and smart logistics remain top of mind.

In this episode, James Chronowski, Vice President of Strategic Account Management at GS1 US, joins hosts Reid Jackson and Liz Sertl to explore how data quality plays a crucial role in addressing these trends. James offers practical insights for businesses to tackle emerging challenges and seize opportunities in an evolving supply chain landscape.

 

In this episode, you’ll learn:

The top trends shaping the supply chain industry in 2025 

Why data quality and governance are essential for businesses

How to build resilient supply chains in a rapidly changing environment

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(02:36) Current and future trends in the supply chain

(08:20) The foundational role of data governance 

(11:42) How businesses can be more resilient in 2025

(13:52) James Chronowski’s favorite tech

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

James Chronowski on LinkedIn

Tuesday, 07. January 2025

Elastos Foundation

BeL2 Arbiter Network: Your Setup Guide to Bitcoin DeFi and Incentive Rewards


The BeL2 Arbiter Network enables decentralized, non-custodial Bitcoin DeFi by connecting Bitcoin’s mainnet to EVM-compatible smart contracts through cryptographic proofs. This marks a pivotal moment where Bitcoin can interact and communicate with other blockchains, uniting years of innovation into a new non-custodial financial layer built on digital gold.

Arbiters play a critical role in this ecosystem, ensuring transaction fairness, resolving disputes, and earning fees. With the recent deployment of BeL2’s Beta Arbiter Network, now is the perfect opportunity to become one of the first to support the system and earn rewards, including the upcoming incentive program launching this Wednesday!

In this article, we provide a comprehensive guide on getting started, complete with a PDF and video support. Alongside the videos linked below, use the provided PDF as detailed documentation to assist you in the setup process. If you wish to instead speak with an AI Assistant, our BeL2 GPT is available 24/7 to help you with this task too. Okay, let’s jump in!

1. BeL2 Arbiter Prerequisites

Before beginning the setup, ensure you have the following:

Hardware Requirements:
CPU: Minimum 2 cores
RAM: 8GB+
Storage: 100GB SSD+
Network: 100Mbps internet connection

Software and Wallets:
Elastos Smart Chain Wallet: Funded with ELA tokens.
Metamask Browser Extension: For Arbiter registration.
Bitcoin Wallet: Such as Unisat or OKX browser extension.
Go (1.20 or newer): Install from Go's official site.
Mobile: Use Web3 Essentials' built-in browser.
Desktop: Use Metamask and Unisat wallets.

Should you need to add the Elastos Smart Chain (ESC) network on Metamask, please refer to the Elastos networks section at https://elastos.info/explorers/.
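For convenience, the ESC network can also be added to MetaMask programmatically with the standard wallet_addEthereumChain request. The chain parameters below are assumptions for illustration; confirm them against the Elastos networks page linked above before use.

```ts
// Sketch: prompt MetaMask to add the Elastos Smart Chain (ESC) network.
// Chain parameters are assumed; confirm them on https://elastos.info/explorers/.
async function addElastosSmartChain() {
  const ethereum = (window as any).ethereum;
  if (!ethereum) throw new Error("MetaMask (or another EIP-1193 provider) is not installed");

  await ethereum.request({
    method: "wallet_addEthereumChain",
    params: [{
      chainId: "0x14",                                // 20 in hex, assumed ESC chain ID
      chainName: "Elastos Smart Chain",
      nativeCurrency: { name: "ELA", symbol: "ELA", decimals: 18 },
      rpcUrls: ["https://api.elastos.io/esc"],        // assumed public RPC endpoint
      blockExplorerUrls: ["https://esc.elastos.io/"], // assumed block explorer
    }],
  });
}
```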

2. Register as a BeL2 Arbiter

Visit the Arbiter Beta Portal: https://arbiter.bel2.org/
Connect Your Wallet: Use Metamask to connect your ESC wallet.
Set Up Arbiter Settings:
Fee Rate: Define your annual fee rate (e.g., 10%).
Service Commitment Deadline: Specify a time period for your service.
BTC Address: Enter your Bitcoin address and public key.
Stake: Choose between staking ELA (minimum 1 ELA) or NFTs for collateral.
Finalize Registration: Submit your configuration to complete the process.

3. Set Up the BeL2 Arbiter Signer Node

Please follow the provided PDF for detailed documentation.

Congratulations! By completing this setup, you’ll contribute to a decentralized BTC finance ecosystem while earning rewards as an Arbiter. This week, we will be announcing the community incentives program where you can earn additional ELA rewards for participation in the system, including Native Bitcoin lending integration for the first utility access to the BTC lending DApp. Did you enjoy this article? To learn more, follow Infinity for the latest updates here!


DIF Blog

DIF Welcomes the Camino Network Foundation!


We are thrilled to announce that the Camino Network Foundation has joined the Decentralized Identity Foundation (DIF) as our newest member. This partnership marks an exciting development in our collective goals to build an open ecosystem for decentralized identity, particularly in the global travel space.

Advancing Digital Identity in Travel

Camino Network brings unique expertise in blockchain technology for the travel industry, operating a specialized layer-one blockchain designed specifically for travel-related use cases. Their focus on creating a permissioned yet public infrastructure for travel industry stakeholders enables interoperable, secure digital identity systems.

Shared Vision for Individual Control

What makes this partnership particularly compelling is our shared commitment to individual-controlled identity. Camino Network's architecture, which emphasizes regulatory compliance while maintaining user privacy and control, demonstrates their dedication to building responsible identity solutions that put users first.

Technical Innovation Meets Real-World Application

Camino Network's technical capabilities include sub-second transaction finality, a high throughput capacity of 4,500 transactions per second, an energy-efficient consensus mechanism, and strong security features.

These technical foundations provide an excellent platform for implementing decentralized identity solutions that can scale to meet the demanding needs of the global travel industry.

Looking Forward

Together with the Camino Network Foundation, we look forward to:

Collaborating on educational initiatives, including webinars, hackathons, and related events
Investigating cutting-edge identity solutions for travel-focused use cases
Establishing standards that align traditional travel industry needs with decentralized identity principles
Delivering more efficient and secure digital identity management options for travelers
Promoting the real-world adoption of decentralized identity in travel applications

The addition of the Camino Network Foundation to DIF strengthens our community's ability to deliver practical, scalable identity solutions. Their expertise in building industry-specific blockchain infrastructure, combined with their understanding of travel industry requirements, will be invaluable as we work together to advance the state of decentralized identity.

Get Involved

We encourage all DIF members to welcome the Camino Network Foundation and explore potential collaborations. Welcome aboard, Camino Network Foundation! We're excited to work together in building the future of digital identity.


Velocity Network

Results of 3rd Annual Elections to the Board of the Velocity Network Foundation

The Trust Framework is an indispensable aspect of the Velocity Network and one of its founding principles. It is critical to understand it to have a complete view of the Network and its goals and purposes. Please watch as Velocity Head of Product and Technology Andres Olave presents an overview of the Trust Framework in the context of the Velocity Network.

Blockchain Commons

Musings of a Trust Architect: How My Values Inform Design


ABSTRACT: By grounding technical decisions in ethical values, we can create compassionate digital architectures. This article examines how core human values such as dignity, autonomy, and human rights inform the design of trustworthy digital systems to enable progressive trust, safeguard privacy, promote individual choice, and build resilient systems resistant to coercion.

As we enter 2025, I’m reflecting on a journey of decades that has been dedicated to advancing privacy, security, and human autonomy in the digital age. My body of work dates back to the 1990s, which saw my early contributions with cryptographic pioneers and my co-authorship of the IETF TLS 1.0 standard. But this year marks the 10th anniversary of the first “Rebooting Web of Trust” workshop, which was a real milestone for my leadership role in shaping secure technologies such as Self-Sovereign Identity and the W3C Decentralized Identifiers standard.

Over the past decade, my focus as a trust architect has sharpened on designing digital systems that empower individuals while respecting core values such as autonomy and human dignity. These designs play a critical role in how individuals express themselves, engage with communities, and pursue their aspirations in a world increasingly shaped by digital interactions.

Yet, this digital realm presents a dual reality. While it opens up unprecedented opportunities, it also makes us increasingly vulnerable to exploitation, coercion, and pervasive surveillance. This tension places a profound responsibility on architects of digital systems: we must ensure that technical designs are guided by deeply rooted human values and ethical principles.

Looking ahead to the next ten years, I reaffirm my commitment to these values, charting a course for the future that places human flourishing and trust at the center of technological progress. But to fulfill this commitment requires the complex answer to a simple question: how can we design systems that uphold dignity, autonomy, and human rights?

The Core Values of Autonomy & Dignity

When we design digital systems, we're not just creating technical specifications. We're crafting spaces where people will live significant portions of their lives. To give them the ability to truly live and excel, we must give them autonomy: a digital system must empower individuals to control their own destinies within this digital realm. To do so, it must provide them with tools that:

Protect their data.
Exercise control over their digital presence.
Ensure freedom from coercion.
Cultivate trust through direct, transparent & efficient peer-to-peer interactions.
Facilitate interactions built on trust and agency.
Enable meaningful participation in the digital economy.
Support engagement that aligns with their values and priorities.
Foster resilience against systemic vulnerabilities.
Operate seamlessly across jurisdictions and political boundaries.

(See my “Principles of Dignity, Autonomy, and Trust in Digital Systems” in the Appendix for a more extensive look at what I consider core values for digital system design.)

Providing individuals with digital autonomy is mirrored by the concept of digital dignity. A digital system that prioritizes dignity respects the individuality of its users and safeguards their right to privacy. It minimizes the data collected, provides clear and revocable consent mechanisms, and ensures that control remains in the hands of the user. A dignified system doesn’t simply protect; it fosters agency and participation, allowing individuals to thrive without fear of surveillance, discrimination, or exploitation.

Autonomy is also closely linked to the concept of trust. You must be able to know and trust your peers in order to truly have the autonomy to make meaningful decisions. This is where systems like progressive trust come in.

A system built on autonomy, dignity, and trust ultimately treats individuals as more than their administrative identities; it recognizes that individuals possess an ineffable core of self that transcends digital representation. The first principle of Self-Sovereign Identity, ‘Existence,’ upholds this kernel of individuality, affirming that any digital identity must respect and support the inherent worth of the person behind it.

To properly respect autonomy and dignity also requires careful attention to power dynamics and accountability. Distinct standards of transparency and privacy should address the power imbalances between individuals and institutions. Achieving this balance involves respecting individual privacy while enabling appropriate oversight of powerful institutions. We must protect the vulnerable while ensuring our larger administrative systems remain fair and just.

We must also address the crucial question: how do we make privacy-preserving technology economically accessible to everyone? Any autonomy-enabling digital system must balance individual and collective interests by supporting sustainable development of digital infrastructure while fostering individual economic sovereignty and resilience. We must reward contributions to shared resources, uphold autonomy and self-determination, and ensure equitable access to rights-preserving technologies. By protecting individual freedoms and enabling fairness, privacy can ultimately be a tool that encourages participation regardless of economic means.

Decentralized identity wallets offer an example of how to embody the characteristics of autonomy, dignity, and trust, while also considering issues such as privacy, balance, and accessibility. They empower individuals to securely prove their credentials (such as educational achievements or professional certifications) directly to peers, without relying on central authorities that could arbitrarily deny their accomplishments. Consider Maria, a small business owner living in a vibrant but economically challenged informal settlement in Buenos Aires, Argentina. Using a self-sovereign, decentralized identity wallet provided by the city, she is able to secure microloans without compromising her privacy, a triumph for both dignity and autonomy.

As for how these core values transform into the design principles of decentralized identity wallets: that’s the next question to address.

From Values to Design Principles

The translation of the core values of autonomy, dignity, and trust into concrete design principles shapes every aspect of trust architectures I build and guides me to specific technical choices:

Cryptographically secure, self-certifying identifiers that operate independently of central authorities.
Local or collaborative key generation and management to keep control in users’ hands.
Peer-to-peer protocols that resist centralized rent-seeking and walled gardens.
Offline-first capabilities to prevent connectivity from becoming a point of coercion.
Data minimization by default.
Choices for elision and redaction to control what individuals share.
Cryptographic selective disclosure to prevent unwanted correlation and tracking (a minimal sketch follows this list).
Revocable permissions to ensure users retain ongoing control over their information.
Zero-knowledge proofs or other systems that can balance privacy and accountability without enabling bad actors.
Decentralized architectures, not as an ideological preference, but as a practical necessity.
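
As a concrete illustration of the elision and selective-disclosure choices above, here is a minimal, hypothetical sketch in Python of revealing one claim from a credential while keeping the rest hidden behind salted hashes. It is not any particular standard or product; the claim names, and the idea of signing the commitment set, are assumptions for illustration only.

```python
# A minimal, illustrative sketch (not a production format) of selective
# disclosure via salted hashes, in the spirit of the elision, redaction,
# and data-minimization principles above. All field names are hypothetical.
import hashlib
import json
import secrets


def commit(claims: dict) -> tuple[dict, dict]:
    """Return (salts, commitments): one salted SHA-256 digest per claim."""
    salts = {k: secrets.token_hex(16) for k in claims}
    commitments = {
        k: hashlib.sha256((salts[k] + json.dumps(v)).encode()).hexdigest()
        for k, v in claims.items()
    }
    return salts, commitments


def disclose(claims: dict, salts: dict, reveal: list) -> dict:
    """Holder reveals only the selected claims, together with their salts."""
    return {k: {"value": claims[k], "salt": salts[k]} for k in reveal}


def verify(disclosed: dict, commitments: dict) -> bool:
    """Verifier checks each revealed claim against its published commitment."""
    for k, item in disclosed.items():
        digest = hashlib.sha256(
            (item["salt"] + json.dumps(item["value"])).encode()
        ).hexdigest()
        if digest != commitments.get(k):
            return False
    return True


claims = {"name": "Maria", "degree": "BSc Accounting", "birth_year": 1988}
salts, commitments = commit(claims)           # an issuer would sign `commitments`
shared = disclose(claims, salts, ["degree"])  # holder reveals only the degree
print(verify(shared, commitments))            # True; name and birth_year stay elided
```

In a real system the commitments would be bound into a signed credential (for example, with BBS signatures or SD-JWT style disclosures) so the verifier can also check the issuer’s signature, not just the hashes.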

The importance of these protections isn’t theoretical. My work examining sensitive data — including wellness, educational credentials, financial transactions, and identity documentation — has revealed how seemingly benign information can threaten human rights when misused. Health data can enable discrimination or coercion. Educational records can create permanent, unchangeable markers that limit opportunities. Financial and identity data can be weaponized to exploit or disenfranchise individuals.

A values-driven design can therefore be seen as not just an abstract focus on ideals such as autonomy, but protection against real-world harms. The rights to be forgotten, to correct errors, and to recover from systemic or administrative injustices ensure fairness in digital interactions. The ability for an individual to selectively share aspects of their identity protects from being reduced to digital records or confined to singular contexts.

From Design Principles to Education

Implementing human-centric design patterns reveals another challenge: helping developers to understand not just the technical complexity, but the human purpose behind each design choice. Developers must grasp not only how their systems operate but also think critically about why their design decisions matter for privacy, autonomy, and dignity.

While technical resources such as documentation and tutorials are indispensable for this education, true progress depends on fostering a compassionate culture where developers internalize value-driven imperatives. This has led me to prioritize the cultivation of decentralized developer ecosystems rooted in collaboration, open development standards, and shared learning. I’ve done this through a variety of means:

Workshops that convene developers, policymakers, and advocates to share insights, collaborate, and explore innovative approaches.
Hackathons and Sprints addressing pressing challenges in digital trust, enabling participants to co-create solutions in hands-on environments.
Regular Developer Meetups for discussing current challenges, sharing practical experiences, and aligning on future roadmaps.
Peer Review and Collaboration Forums to ensure transparency, accountability, and robust feedback in the development processes.
Cross-Organization Coordination to facilitate collaborative projects, share resources, and distribute financial and time-related investments such as security reviews.
Ecosystem Building to design decentralized solutions that balance individual empowerment with collective benefit, ensuring that all contributors — users, developers, and communities — derive meaningful value and that mutual respect is cultivated through shared goals and open participation.
Mentorship Programs to guide emerging developers in adopting values-driven approaches, fostering ethical practices from the outset of their careers.
Advocacy Efforts that include collaborating with policymakers and regulators to define a techno-social contract that upholds human dignity, ensures equitable and compassionate digital rights, and protects the interests of the vulnerable.

With this decentralized, collaborative approach to education, no single entity controls the evolution of these technologies. Instead, innovation is fostered across a diverse network of developers, building resilience into these systems and ensuring that solutions remain adaptable, inclusive, and accessible. This cooperative spirit reflects the very principles of autonomy, compassion, and inclusivity that underpin trustworthy digital systems.

From Education to Implementation

As communities evolve from educational groups to implementation groups, forums and discussions continue to expand the community and allow us to address the broader societal implications of technical choices. Foundational principles should follow.

The Ten Principles of Self-Sovereign Identity is an example of a set of foundational principles that directly evolved from discussion at an educational workshop (RWOT2). The Gordian Principles and Privacy by Demand are other examples of core principles that evolved out of earlier discussions. Principles such as these form a bedrock for the values we will work to embed in actual implementations.

Code reviews and project evaluations should then include these principles — and more generally ethical alignment — as a key criterion. They’re not just about technical correctness! By embedding values into every stage of development, we ensure that systems are designed to empower individuals, not exploit them.

How can we manage the critical balance between transparency for accountability and privacy for individuals? How do we address power dynamics and ensure systems protect the rights of the vulnerable while holding powerful entities accountable? Ultimately, how do we prioritize both user autonomy and security in decisions around data storage, key management, or cryptographic algorithms? These are questions that should both arise and be addressed when considering a full education-to-implementation pipeline that is based on collaboration and the consideration of values.

Ultimately, implementing systems that respect dignity and autonomy demands a new kind of techno-social contract. This contract must bridge multiple realms:

The technical capabilities that make solutions possible.
The cultural shifts that make them acceptable.
The economic incentives that make them sustainable.
The political will that makes them viable.
The contractual & legislative agreements that make them durable.

This comprehensive approach will serve both individual autonomy and our collective commons.

By ensuring that digital trust and human dignity remain at the core of technological progress, we build systems that serve as a foundation for a more equitable, humane, and resilient digital future. The result is implementations that transcend technical excellence by instilling a sense of stewardship among developers. They become not just the creators of secure systems but also champions of the communities these systems serve.

From Implementation to Deployment

Any framework to support values such as autonomy, dignity, and trust must be holistic in its approach.

Technical standards and specifications must harmonize with cultural norms and social expectations. Economic models must simultaneously foster individual resilience and collective benefits, ensuring that privacy and autonomy remain accessible to everyone, and don’t become luxuries available only to the wealthy. Cultural norms and legislative efforts must go beyond surface-level privacy protections, addressing both the technical realities and human needs at stake. Most importantly, technical and political discourse must evolve to recognize digital rights as fundamental human rights. This paradigm shift would enable policies that support compassionate decentralized approaches while holding powerful actors accountable to the communities they serve.

Nurturing these collaborative ecosystems plays a central role in this transformation. We must foster cultures of ethical awareness not just among developers but across society. This means supporting implementers and maintainers who understand not just the “how” of our systems, but the “why”. It means engaging leaders who grasp both technical constraints and human needs and creating sustainable economic models that reward contributions to the commons while protecting individual rights.

Legal deployment has always been one of the trickiest challenges in popularizing a system that supports individual autonomy, but the concept of Principal Authority presents a promising foundation, courtesy of Wyoming’s digital identity law. It goes beyond the traditional frameworks of property and contract law, which, while useful, are insufficient in addressing the unique challenges of digital identity.

Property law focuses on ownership and control and contract law governs agreements between parties, but neither fully captures the dynamic, relational nature of digital representations or the need for individual agency in decentralized systems. Principal Authority, grounded in Agency Law, functions much like the relationship between a principal and an agent in traditional legal contexts. For instance, just as an agent (like a lawyer or real estate agent) acts on behalf of a principal while preserving the principal’s control, Wyoming’s digital identity law ensures that individuals retain ultimate authority over any actions or representations made on their behalf in the digital space. This legal framework acknowledges human agency — not mere ownership or contractual consent — as the primary source of legitimate authority. The result is a modern recognition of individual sovereignty, and therefore autonomy, that still fosters collaboration and commerce in the increasingly interconnected digital realm.

But, even if Principal Authority does prove a useful tool, it’s just one tool in a whole toolkit that will be necessary to successfully deploy rights-supporting software into society.

Conclusion

My responsibility as a trust architect is not simply to build systems that work, but to build systems that work for humanity. This requires a steadfast commitment to values, a willingness to navigate difficult trade-offs, and a relentless focus on aligning design principles with human needs.

The technical challenges of implementing values-driven design are significant, but they’re challenges worth solving. When we build systems that respect human rights and dignity, we create digital spaces that enhance rather than diminish human flourishing.

As developers, policy makers, or advocates, we hold the power to embed human values into every line of code, every standard, and every policy. As we build tomorrow’s digital ecosystems, we must therefore ask: What can I do to make trust and dignity the foundation of our systems?

To answer that question in a positive way will ultimately require a multi-stakeholder effort where technologists, policy makers, and civil society collaborate to uphold principles of equity, inclusion, and transparency in all aspects of digital architecture, down the entire linked chain from values to design to education to implementation to deployment.

I hope you’ll be part of that undertaking.

Appendix 1: Principles of Dignity, Autonomy, and Trust in Digital Systems

While working on this article, I put together my own principles for dignity, autonomy, and trust in digital systems. As with my self-sovereign principles of a decade ago, I am offering these up for discussion in the community.

1. Human Dignity: Design systems that prioritize and respect the inherent dignity of every individual. Embed privacy protections, minimize data collection, and provide clear, revocable consent mechanisms that align with user empowerment. Protect individuals from harm while fostering compassionate digital environments that promote trust, human flourishing, and technological progress aligned with human-centric values, actively considering potential societal impacts and unintended consequences.

2. Autonomy & Self-Determination: Empower individuals to control their digital identities and make decisions free from coercion or undue influence. Enable them to manage their interactions, transact freely, preserve their sovereignty, act as peers not petitioners, and assert their rights through decentralized, compassionate, user-controlled systems.

3. Privacy by Design (& Default): Embed robust privacy protections into every system, implementing data minimization, selective disclosure, anti-correlation, and cryptographic safeguards as default practices. This ensures that users retain control over their information and remain shielded from tracking, correlation, and coercion.

4. Resilience Against Exploitation: Architect systems to withstand adversarial threats and systemic vulnerabilities. Leverage decentralization, cryptographic protections, and offline-first capabilities to empower users even in hostile and adversarial environments and to ensure autonomy remains intact under pressure.

5. Progressive Trust: Design systems that reflect the natural evolution of trust, enabling selective and intentional information sharing. Foster trust gradually through mutual engagement, avoiding premature commitments, unnecessary reliance on intermediaries, or imposed full disclosure.

6. Transparency & Accountability: Hold powerful institutions accountable while safeguarding individual privacy. Balance transparency with confidentiality to mitigate power imbalances, protect the vulnerable, and ensure justice and fairness in digital interactions. Ensure that innovation and system development prioritize fairness and compassionate considerations, holding powerful institutions accountable for societal impacts.

7. Interoperability: Foster systems that are interoperable across cultural, legal, and jurisdictional boundaries. Promote inclusivity by prioritizing open standards, decentralized infrastructures, and accessible tools that serve diverse communities while avoiding exclusivity or centralized gatekeeping.

8. Adaptive Design: Incorporate insights from Living Systems Theory, Ostrom’s Commons, and other governance and design models to build architectures that are dynamic, resilient, and capable of evolving alongside societal and technological changes. Emphasize adaptability through iterative growth, collective stewardship, and interoperability, balancing stability with flexibility to support sustainable and inclusive digital ecosystems.

9. A Techno-Social Contract: Bridge technical capabilities with cultural, economic, and legislative frameworks to create a sustainable, human and civil rights-preserving digital ecosystem. Recognize digital rights as fundamental human rights and align systems with shared values of autonomy, dignity, and collective benefit.

10. Ethics: Cultivate a culture of ethical awareness, critical thinking, and collaboration among developers, policymakers, and users. Ensure technical decisions align with principles of trust and dignity by embedding education, mentorship, and a commitment to shared responsibility in the development process. Encourage innovation that is mindful of societal impacts, fostering a development ethos that prioritizes responsibility and safeguards against unintended consequences.

Appendix 2: Use Cases for Values Designs

Values affect all of my designs. Following is some discussion of how they’ve influenced my work on self-sovereign identity and progressive trust.

Self-Sovereign Identity

The conviction that technical designs must be built on human values came into sharp focus for me in 2016 when I authored the 10 Principles of Self-Sovereign Identity. These principles were not born from technical specifications alone but from a deep commitment to dignity, autonomy, and human rights. Over time, those values have guided the development of technologies such as Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), and the DIDComm protocol for secure, private communication. They have also influenced broader thinking around cryptographic digital assets such as Bitcoin. I have come to see these values not as abstract ideals but as the very foundation of trust itself: principles that must underpin every digital system we create.

My principles of Self-Sovereign Identity also had a strong historical basis: they were built on a deep historical and philosophical foundation. The concept of sovereignty has evolved over centuries — from feudal lords to city-states to nations — consistently reflecting a balance between autonomy and interconnection. When I wrote about the principle of “Control”, it was not about advocating absolute dominion but about framing sovereignty as the right to individual agency and prosperity, much like medieval cities, which preserved their independence while flourishing within broader networks of trade and diplomacy.

This understanding was deeply influenced by Living Systems Theory, which shows how every entity maintains its autonomy through selective boundaries while remaining part of a larger ecosystem. Just as a cell’s membrane allows it to control what passes in and out while still participating in the larger organism, digital identity must enable both individual autonomy and collective participation. This biological metaphor directly informed principles such as “Existence” and “Persistence,” which recognize that identity must be long-lived but also able to interact with its environment, and “Access” and “Portability”, which define how identity information flows across boundaries.

The principles also reflect Ostrom’s insights about managing common resources as well as feminist perspectives on sovereignty that emphasize agency over control. When I wrote about the principles of “Consent” and “Protection”, I was describing the selective permeability of these digital boundaries—not walls that isolate, but membranes that enable controlled interaction. “Interoperability” and “Minimization” similarly emerged from understanding how sovereign entities must interact while maintaining their independence and protecting their core rights.

These concepts culminate in the final SSI Principles such as “Transparency,” which balances individual autonomy with collective needs, and “Portability,” which ensures that identities can move and evolve just as living systems do. Each principle reflects this interplay between values and technical implementation, creating a framework where digital sovereignty serves human dignity. They weren’t meant to be an endpoint but rather a starting point for an evolving discussion about sovereignty in the digital age — one that continues to guide our work as we push the boundaries of what’s possible in digital identity, ensuring our innovations prioritize human needs rather than subordinating them to technology.

The technical complexity required to implement such systems is significant, but it serves a deeply human purpose: the ability to build autonomy and trust.

Progressive Trust

Trust is not static; it evolves over time — a concept I describe as progressive trust. This principle reflects how trust naturally develops between people and organizations, both in the physical and digital worlds. Relationships are built incrementally, through selective and intentional disclosures, rather than being imposed upfront or dictated solely by third-party intermediaries. This gradual evolution is essential for fostering genuine connections while mitigating risks.

I discovered this concept through years of observing how people actually build relationships. For instance, when meeting someone at a conference, we don’t immediately share our life story. Instead, we begin with small exchanges, revealing more information as comfort, context, and mutual understanding grow. Digital systems must mirror this natural evolution of trust, creating environments that respect psychological needs and empower individual agency.

A well-designed system transforms these ideas about progressive trust into deployable systems by enabling users to disclose only what is necessary at each stage, while retaining the ability to refine or revoke permissions as relationships deepen, change, or dissolve. This flexibility demands advanced technical solutions, such as:

Sophisticated cryptographic protocols that enable selective and intentional disclosure.
Relationship-specific identifiers to ensure contextual privacy (see the sketch after this list).
Mechanisms to prevent unwanted tracking or correlation.
Tools that balance transparency with security, safeguarding trust while avoiding vulnerabilities that could undermine it.
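
As one hedged illustration of the relationship-specific identifiers mentioned above, the toy sketch below derives a distinct, unlinkable identifier for each peer from a single locally held secret. The derivation scheme and the peer labels are assumptions for illustration only; production systems would use proper DID methods and key management rather than a bare HMAC.

```python
# A toy sketch of relationship-specific ("pairwise") identifiers: each peer
# sees a different identifier derived from one local secret, so two
# relationships cannot be trivially correlated with each other.
import hashlib
import hmac
import secrets


def pairwise_id(local_secret: bytes, peer_label: str) -> str:
    # HMAC keyed by the local secret; different peer labels yield unrelated IDs.
    return hmac.new(local_secret, peer_label.encode(), hashlib.sha256).hexdigest()[:32]


secret = secrets.token_bytes(32)               # never leaves the user's device
print(pairwise_id(secret, "employer-acme"))     # identifier shown to one peer
print(pairwise_id(secret, "clinic-downtown"))   # unlinkable to the first one
```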

The technical complexity required to implement such systems is significant, but it serves a deeply human purpose: enabling individuals to build trust incrementally, naturally, and on their own terms.

Knowing from the start which values we are aligning with helps to define this sort of design, even when (as with progressive trust) doing so is hard. The result is an architecture that not only reflects the organic nature of human relationships but also upholds autonomy, fosters confidence, and protects against coercion or exploitation.

Monday, 06. January 2025

Elastos Foundation

Elastos Announces Arbiter Network for BeL2 Protocol, Opening a New Era of Non-Custodial BTC Finance

Arbiter Nodes Provide Time-Based Services and Dispute Resolution, Allowing Bitcoin to Remain on Mainnet While Tapping into Smart Contracts Across EVM Blockchains  Building on its vision of fully decentralized financial services powered by Bitcoin, Elastos today announced the public beta release of its Arbiter Network for the Bitcoin-Elastos Layer 2 protocol, BeL2. This step marks […]

Arbiter Nodes Provide Time-Based Services and Dispute Resolution, Allowing Bitcoin to Remain on Mainnet While Tapping into Smart Contracts Across EVM Blockchains

 Building on its vision of fully decentralized financial services powered by Bitcoin, Elastos today announced the public beta release of its Arbiter Network for the Bitcoin-Elastos Layer 2 protocol, BeL2. This step marks a major milestone in the growth of the BTCFi ecosystem, making it possible to secure BTC-backed loans, stablecoins, and other advanced smart contract solutions without ever relocating Bitcoin off the main network.

Developed by Elastos (ELA)—an early mover in SmartWeb technologies—the BeL2 protocol sets up a trustless clearing network that sends proofs rather than assets. Through these cryptographic proofs, smart contracts on EVM-compatible blockchains can verify that BTC remains locked on Bitcoin’s mainnet and use it as collateral. With the introduction of Arbiter nodes, developers and users now gain access to time-based transaction oversight and decentralized dispute resolution—features that clear the path for a new wave of decentralized finance built on Bitcoin’s security.

“The Arbiter Network is the final piece in our BeL2 infrastructure puzzle,” said Sasha Mitchell, Head of Operations at Elastos. “With Arbiter nodes providing trustless oversight and time-based services, we can offer a fully decentralized BTC finance platform—one that builds on Bitcoin’s resilience without depending on centralized custodians.”

How the BeL2 Arbiter Network Works

Under the surface, BeL2 keeps Bitcoin anchored to its original chain while enabling a range of financial operations elsewhere. First, users lock their BTC using dedicated mainnet scripts. This non-custodial model ensures that Bitcoin never needs to be wrapped or moved, preserving its core security properties and owner independence. Once locked, Zero-Knowledge Proofs (ZKPs) affirm the collateral status, allowing external networks to confirm how much BTC is involved without revealing private transactional data.

These ZKPs pass through a decentralized oracle service that conveys proof details—rather than assets—into EVM-based smart contracts. By transferring cryptographic confirmations instead of tokens, BeL2 avoids the pitfalls linked to wrapped BTC. Meanwhile, the newly released Arbiter Network oversees loan terms, coordinates time-based tasks, and resolves any disputes. Arbiter nodes pledge Elastos (ELA)—a reserve currency merge-mined with Bitcoin and secured by up to 50% of its hashrate—to uphold network reliability, earning ELA and BTC fees in return for their role.

This approach offers distinct advantages. Users can be confident that their secured Bitcoin stays under their control on its original chain. Arbiter nodes jointly validate transactions in a decentralized manner, creating a fair environment for everyone. Because the proofs rely on Zero-Knowledge methods, the system preserves strong security and privacy. By supporting EVM-based smart contracts, BeL2 also unlocks wide-ranging DeFi possibilities—spanning simple lending scenarios to more advanced stablecoin setups and beyond.
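
To make the flow above easier to follow, here is a deliberately simplified, hypothetical sketch of the message pattern: a proof object describing locked BTC, rather than the BTC itself, is handed to an EVM-side contract. Every class and field name below is invented for illustration, and the placeholder attestation check stands in for the zero-knowledge verification and oracle relay that BeL2 actually performs.

```python
# A toy model of the BeL2 flow described above: proofs about Bitcoin
# collateral move to an EVM-side contract while the BTC stays on mainnet.
# Names and checks are illustrative only, not the real protocol.
from dataclasses import dataclass


@dataclass
class CollateralProof:
    lock_txid: str        # Bitcoin mainnet transaction that locks the BTC
    amount_sats: int      # amount attested as locked
    unlock_height: int    # timelock height enforced by the Bitcoin script
    attestation: str      # stands in for a zero-knowledge proof in this sketch


class EvmLoanContract:
    def __init__(self):
        self.collateral = {}

    def verify(self, proof: CollateralProof) -> bool:
        # Placeholder check; the real contract would verify a ZK proof
        # delivered by the decentralized oracle service.
        return proof.attestation.startswith("zk:") and proof.amount_sats > 0

    def open_loan(self, borrower: str, proof: CollateralProof) -> bool:
        if self.verify(proof):
            self.collateral[borrower] = proof
            return True
        return False


proof = CollateralProof("e3b0c442", 5_000_000, 880_000, "zk:demo-attestation")
contract = EvmLoanContract()
print(contract.open_loan("0xBorrower", proof))  # True: the BTC never leaves mainnet
```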

BeL2 Arbiter Beta Release Details

The beta release of the Arbiter Network will progress in stages, starting with reduced collateral limits to maintain stability and gather input from initial participants. During this phase, Arbiters can stake a small amount of ELA or ELA-based NFTs, ensuring enough protection to support fair procedures while the network is put to the test. At first, rewards will be distributed in ELA, with BTC fee structures planned for future updates. A user-friendly dispute resolution portal lets Arbiters track events, approve or challenge transactions, and honor time-based constraints for overall reliability.

“By blending Bitcoin’s security with Elastos’ scalable foundations, we’re establishing a new financial model—a decentralized bank of sorts, powered by code and cryptography,” said Sasha Mitchell. “Our broader aim is a smooth, global financial web that remains anchored to Bitcoin’s trust.”

Broader Implications for BTCFi

By directly connecting each DeFi transaction to Bitcoin’s mainnet, BeL2 removes the need for wrapped BTC setups, lessening complexity and eliminating custodial points. This breakthrough supports various use cases: non-custodial loans, stablecoins that can be redeemed at will, and decentralized trading markets offering both spot and derivative BTC products. With Arbiter nodes providing reliable governance and resolving disputes, the broader BeL2 sphere gains a unified system for organizing multi-party transactions with fairness—an advancement that places Bitcoin at the core of the future’s decentralized financial world.

Additional Information

Learn More about BeL2
Join the Arbiter Beta
Contact info@elastos.org for partnership inquiries or media requests.

About Elastos

Elastos is a SmartWeb ecosystem builder focused on enabling decentralized application creation and cross-chain connectivity. Built on top of Bitcoin merge-mining, Elastos relies on the security of the world’s largest public blockchain and extends it with additional layers. The introduction of BeL2 and its Arbiter Network marks Elastos’ latest effort to advance a more open, clear, and trustless global financial system.

 

Friday, 03. January 2025

Project VRM

Derailing the Customer Journey

This came in the mail today: Everything they list is something I don’t want to do. I’d rather just accumulate the miles. But I can’t, unless I choose one of the annoyances above, or book a flight in the next three months. So my customer journey with American is now derailed. There should be better […]

This came in the mail today:

Everything they list is something I don’t want to do. I’d rather just accumulate the miles. But I can’t, unless I choose one of the annoyances above, or book a flight in the next three months.

So my customer journey with American is now derailed.

There should be better ways for customers and companies to have journeys together.

Hmm… Does United have one?

Here’s a picture of my customer journey with United Airlines, as of today:

I’m also a lifetime member of the United Club, thanks to my wife’s wise decision in 1990 to get us both in on that short-lived deal.

Premier Platinum privileges include up to three checked bags, default seating in Economy Plus (more legroom than in the rest of Economy), Premium lines at the ticket counter and Security, and boarding in Group One. There are more privileged castes, but this one is a serious tie-breaker against other airlines. Also, in all our decades of flying with United, we have no bad stories to tell, and plenty of good ones.

But now we’re mostly based in Bloomington, Indiana, so Indianapolis (IND) is our main airport. (And it’s terrific. We recommend it highly.) It is also not a hub for any of the airlines. The airline with the most flights connecting to IND is American, and we’ve used them. I joined their frequent flier program, got their app, and started racking up miles with them too.

So here is one idea, for every airline: having respect for one’s established status with other airlines means something. Because that status (or those stati) are credentials: They say something about me as a potential passenger. It would be nice also if what I carry, as an independent customer, is a set of verifiable preferences—such as that I always prefer a window seat, never tow a rolling bag on board (I only have a backpack), and am willing to change seats so a family can sit together. Little things that might matter.
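
For illustration only, a self-asserted “verifiable preferences” document of the kind described above might look something like the sketch below, loosely shaped after a W3C Verifiable Credential. Every field name and value here is hypothetical, not a proposal for an actual airline schema.

```python
# A hypothetical sketch of a self-asserted traveler-preferences credential.
# Field names, DIDs, and values are illustrative only.
import json

preferences_credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "TravelerPreferences"],
    "issuer": "did:example:traveler123",          # self-issued by the customer
    "credentialSubject": {
        "id": "did:example:traveler123",
        "seatPreference": "window",
        "carryOn": "backpack-only",
        "willingToSwapSeatsForFamilies": True,
        "loyaltyStatus": [{"airline": "United", "tier": "Premier Platinum"}],
    },
}

print(json.dumps(preferences_credential, indent=2))
```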

I bring all this up because fixing “loyalty” programs shouldn’t be left up only to the sellers of the world. They’ll all do their fixes differently, and they’ll remain deaf to good input that can only come from independent customers with helpful tools of their own.

Developing those solutions to the loyalty problem is one of our callings at ProjectVRM. I also know some that are in the works. Stay tuned.

Thursday, 02. January 2025

Elastos Foundation

TeamELA.org: A Bitcoin-Secured Digital Reserve Asset Portal

ELA exists to bring together the strongest elements of Bitcoin’s security approach with a flexible blockchain structure. By making use of merge mining, ELA draws on Bitcoin’s hashrate while enforcing a limit of 28.22 million tokens. In this way, ELA adopts Bitcoin’s protective strength at a small fraction of its energy usage, shaping ELA into […]

ELA exists to bring together the strongest elements of Bitcoin’s security approach with a flexible blockchain structure. By making use of merge mining, ELA draws on Bitcoin’s hashrate while enforcing a limit of 28.22 million tokens. In this way, ELA adopts Bitcoin’s protective strength at a small fraction of its energy usage, shaping ELA into a “Bitcoin-Secured Reserve Asset.”

Security and scarcity define a blockchain’s worth. Bitcoin’s remarkable success is built on its proof-of-work protocol and a cap of 21 million coins. ELA follows that model: it anchors its proof-of-work to Bitcoin’s extensive hashrate, while its own final supply remains fixed. This approach builds confidence in ELA’s underlying economy. Bitcoin is almost impossible to compromise, and by extension, ELA gains that same resistance.

TeamELA.org, built by Infinity under team member Sasha Mitchell, is an educational hub for ELA which brings together merge-mining metrics, supply and value graphs, educational animations and exchange and wallet links for ELA holders—turning complex ideas into a clear set of tools. TeamELA.org connects to the Elastos Explorer and Minerstat in real time, revealing how ELA’s share of Bitcoin’s hashrate compares to its overall supply. By adding CoinGecko‘s price data, the site shows how ELA’s security links with tangible value, giving users a clear metric for judging its potential.

It also serves as a straightforward route to ELA on exchanges—both centralized (Coinbase, KuCoin, Gate.io, Huobi) and decentralized (Uniswap, Chainge Finance, Glide Finance)—eliminating the need to search for obscure markets. Meanwhile, quick wallet downloads for iOS or Android and a short “How to stake ELA” guide help newcomers store or stake their tokens within minutes.

Beyond these essentials, TeamELA.org offers interactive visuals to demonstrate how Bitcoin’s proof-of-work runs hand-in-hand with ELA, emphasizing the efficiency of merge mining. A supply timeline highlights how ELA issuance winds down by 2105, capped at 28.22 million. There’s also a value calculator, outlining how a slice of Bitcoin’s mining income might feed into ELA’s core worth. Together, these features deliver a complete view of ELA’s security, supply, and utility in one cohesive online hub.

Behind the scenes, TeamELA.org relies on custom API hooks for data retrieval, frequently pulling stats such as Elastos’ and Bitcoin’s latest block hashrate, price, and supply figures so that the numbers stay current. Animations illustrate how ELA’s protocol uses Bitcoin’s calculations, emphasizing that ELA benefits from Bitcoin’s security with no added energy load.
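
As a rough illustration of the kind of metric described above, the sketch below computes ELA’s merge-mined share of Bitcoin’s hashrate and a simple security-per-market-cap ratio from plain inputs. The figures are placeholders, not live data; the real site sources its numbers from the Elastos Explorer, Minerstat, and CoinGecko.

```python
# Illustrative calculations only; input values are placeholders.
def merge_mined_share(ela_hashrate_ehs: float, btc_hashrate_ehs: float) -> float:
    """Fraction of Bitcoin's hashrate that also merge-mines ELA."""
    return ela_hashrate_ehs / btc_hashrate_ehs


def security_ratio(ela_hashrate_ehs: float, circulating_supply: float,
                   price_usd: float) -> float:
    """Hashrate securing each billion dollars of ELA market capitalization."""
    market_cap_billions = circulating_supply * price_usd / 1e9
    return ela_hashrate_ehs / market_cap_billions


share = merge_mined_share(ela_hashrate_ehs=350.0, btc_hashrate_ehs=700.0)
print(f"ELA secured by {share:.0%} of Bitcoin's hashrate (placeholder figures)")
print(security_ratio(350.0, circulating_supply=22_000_000, price_usd=3.0))
```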

Bitcoin-Level Security: By gaining a noteworthy share of Bitcoin’s Exahashes per second, ELA surpasses other altcoins that operate on smaller pools. TeamELA.org computes on the fly to show ELA’s strong security ratio.
Fixed Supply: Topped at 28.22 million tokens, ELA reflects Bitcoin’s scarcity yet sets its own limit. A clear halving roadmap on the Supply page details its release steps—another reason some see ELA as akin to “digital gold.”
Reduced Energy Use: Merge mining calls for no extra machinery. Bitcoin miners can solve ELA blocks alongside Bitcoin blocks with the same proof-of-work, lowering power demands while boosting overall safety.
Decentralized Ecosystem: Elastos began as a wide-ranging Web3 platform, offering decentralized apps and services that draw on ELA’s heightened protection. As the network grows, ELA’s significance as the “fuel” for dApps rises, backed by Bitcoin’s level of assurance.
Single-Stop Convenience: The portal merges stats, how-to resources, and integrated trading/wallet options in one location. New arrivals only need this one site to discover, acquire, and manage ELA.

OPEN-SOURCE CODE
In line with decentralized values, TeamELA.org’s repository is open to all. Anyone may review how the platform sources data from the Elastos Explorer or tracks price details from CoinGecko. Developers are free to examine, refine, or enhance each element. Providing open code encourages transparency:

Security: The community can detect and address vulnerabilities.
Collaboration: Engineers around the world can suggest changes or new tools.
Education: Enthusiasts can observe how raw blockchain figures become user-friendly dashboards.

ELA’s primary strength grows from pairing Bitcoin’s extensive hashrate with a strict supply limit, as per Satoshi’s merge-mining vision shared here and here. This grants ELA the benefits of reliable proof-of-work at a lower energy cost—something uncommon among altcoins. By visiting TeamELA.org, users gain a clear rundown of ELA: side-by-side hashrate insights, supply data, paths to buy, and options to stake or store tokens.

Plenty of projects claim decentralization, but ELA directly connects with Bitcoin’s security approach, widely regarded as the most tested in the industry, supporting a network that remains resource-friendly and strong. This transparent method builds trust every step of the way, capturing ELA’s core purpose: harness Bitcoin’s raw power, maintain a fixed token count, and invite the public to participate in a new era of decentralized applications anchored by a proven proof-of-work foundation. What’s more, TeamELA.org is a portal to support CoinTelegraph in their upcoming research report on ELA, passed by the Cyber Republic in Proposal 176. Did you enjoy this article? To learn more, follow Infinity for the latest updates here!

 


DIF Blog

DIF Newsletter #47

January 2025 DIF Website | DIF Mailing Lists | Meeting Recording Archive Table of contents Decentralized Identity Foundation News; 2. Working Group Updates; 3. Open Groups; 4. Announcements at DIF; 5. Community Events; 6. DIF Community; 7. Get involved! Join DIF 🚀 Decentralized Identity Foundation News DIF Labs Launches Beta Cohort […]

January 2025

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents: 1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Open Groups; 4. Announcements at DIF; 5. Community Events; 6. DIF Community; 7. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

DIF Labs Launches Beta Cohort

DIF's commitment to accelerating decentralized identity innovation takes a major step forward with the launch of DIF Labs' Beta Cohort. This new initiative brings together leading projects in Bitcoin Ordinals, Linked Claims, and privacy-preserving verification through VerAnon. Learn how DIF Labs is transforming how decentralized identity solutions are built, tested and scaled at DIF Labs: DIF Launches Beta Cohort.

Thank you to DIF Labs Chairs Andor Kesselman, Ankur Banerjee, and Daniel Thompson-Yvetot for their tremendous stewardship in building the Labs community, which will be a major focus for DIF in 2025!

Official Adoption of BBS Blind Signatures and Pseudonym Specifications by the CFRG

Major progress in privacy-preserving credentials as BBS Blind Signatures and BBS Pseudonyms specifications are officially adopted by the Crypto Forum Research Group (CFRG). These developments bring us closer to standardized, privacy-protecting digital credentials. Read about BBS and how to get involved at BBS: Where Proof Meets Privacy.

DIF Technical Leaders Engage Korean Students

DIF extends its educational outreach in Asia as Markus Sabadello and Kyoungchul Park delivered an engaging session on decentralized identity technologies at Seoul's MegaStudy academy. The lecture covered DIDs, VCs, and digital wallets, demonstrating DIF's commitment to nurturing the next generation of identity technologists. Read the full story at DIF Technical Leaders Engage Korean Students.

🛠️ Working Group Updates 📓 DID Methods Working Group

Welcomed new co-chairs Jonathan Rayback and Matt McKinney. Made progress on selection criteria for DID methods, working to refine criteria list and gather method proposals. Group developing template for proposals and planning process to evaluate DID methods. Next meeting set for January 15th to continue work.

DID Methods Working Group meets bi-weekly at 9am PT/ noon ET/ 6pm CET Wednesdays

💡Identifiers and Discovery Working Group

Discussed DID traits specification and potential IPFS improvements. Team explored using IPFS as a DID method and how to enhance file identification. Discussed document controller properties and DID method comparisons.

Advanced work on DID Web VH specification, focusing on key rotation, version management and revocation approaches. Discussed version ID formats and query parameters. Team preparing to finalize v0.5 of specification.

Identifiers and Discovery meets bi-weekly at 11am PT/ 2pmET/ 8pm CET Mondays

🪪 Claims & Credentials Working Group

Planning for January 28th session on age verification and credentials standardization. The session will cover proof of age schema, data models, and practical applications like age-restricted content, senior services, and substance purchasing verification. Register here to attend

The Credential Schemas work item meets bi-weekly at 10am PT / 1pm ET / 7pm CET Tuesdays

🔐 Applied Crypto Working Group

BBS+ specs CFRG adoption call succeeded! Exploring range proofs and verifiable encryption concepts.

The DIF Crypto - BBS work item meets weekly at 11am PT/2pm ET /8pm CET Mondays

🧪 DIF Labs Working Group

Beta cohort projects progressing well - discussed EUDI wallet's new payment features, the Linked Trust project development, and connecting mentors with specific expertise to projects. Planning for mid-February presentations.

The Credential Schemas work item meets monthly on the 3rd Tuesday at 8am PT / 11am ET / 5pm CET

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click join DIF.

📖 Open Groups at DIF DIDComm User Group

Reviewed outcomes from recent DIDComm interopathon, though participation was limited to two organizations. Identified issues with DIDCOMM Demo around multi-key implementation and JWK support. Proposed moving to bi-weekly meetings with alternating times to accommodate different time zones. Planning another interopathon for Q1 2025.

Meetings take place weekly on Mondays at noon PST. Click here for more details

Veramo User Group

Meetings take place weekly on Thursdays, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details

🌏 APAC/ASEAN Discussion Group

The DIF APAC call takes place Monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.

🌍 DIF Africa

Meetings take place Monthly on the 3rd Wednesday at 1pm SAST. Click here for more details

🌍 DIF Japan SIG

Focused on interoperability discussions from recent Taipei conference. Covered integration with existing systems and digital identity platform shifts. Discussed Taiwan's approach to decentralized models.

Meetings take place on the last Friday of each month 8am JST. Click here for more details

🌍 DIF Hospitality & Travel SIG

Made progress on standardizing data formats, internationalization, and location services. Working on documentation for C-suite audiences and implementation guides. Developing approaches for currency handling and regional variations in user profiles.

Meetings take place weekly on Thursdays at 10am EST. Click here for more details

📢 Announcements at DIF Proof of Age Workshop (Webinar)

Join us on January 28th at 10am PT for an important discussion on proof of age solutions and age-related verifiable credentials. The session will explore DIF's CC working group's initiatives around implementing privacy-preserving age verification for various use cases including:

Age-related discounts and benefits
Minor protection and age-gating
Access control for age-restricted products and services
Coordinating with related industry efforts

This collaborative session aims to bring together stakeholders to share insights and align efforts in building standardized, privacy-respecting age verification solutions.

Register here

🗓️ ️DIF Community DIDComm: Securing Industry 4.0 Communications

Read Dr. Carsten Stöcker's insightful analysis of how DIDComm can address critical security vulnerabilities exposed by the recent Salt Typhoon attacks. Learn how DIDComm's end-to-end encryption and perfect forward secrecy enable secure machine-to-machine and business-to-business communications for Industry 4.0. Read more

👉Are you a DIF member with news to share? Email us at communication@identity.foundation with details.

New Member Orientations

If you are new to DIF join us for our upcoming new member orientations. Please subscribe to DIF’s eventbrite for upcoming notifications on orientations and events.

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website or follow our channels:

Follow us on Twitter/X

Join us on GitHub

Subscribe on YouTube

Read the DIF blog

New Member Orientations

If you are new to DIF join us for our upcoming new member orientations. Find more information on DIF’s slack or contact us at community@identity.foundation if you need more information.

Sunday, 29. December 2024

Kantara Initiative

Are you ready for the new EU DORA regulations?

The EU’s Digital Operational Resilience Act (DORA) is set to take effect in January 2025. Its aim is to ensure that companies and institutions active in the EU financial sector […] The post Are you ready for the new EU DORA regulations? appeared first on Kantara Initiative.

The EU’s Digital Operational Resilience Act (DORA) is set to take effect in January 2025. Its aim is to ensure that companies and institutions active in the EU financial sector […]

The post Are you ready for the new EU DORA regulations? appeared first on Kantara Initiative.

Monday, 23. December 2024

FIDO Alliance

2024 FIDO Alliance Seoul Public Seminar: Unlocking a Secure Tomorrow with Passkeys

The FIDO Alliance’s Seoul Public Seminar was held on December 10, 2024, at the SK Telecom Pangyo Office. The theme for this milestone event was Unlocking a Secure Tomorrow with […]

The FIDO Alliance’s Seoul Public Seminar was held on December 10, 2024, at the SK Telecom Pangyo Office. The theme for this milestone event was Unlocking a Secure Tomorrow with Passkeys and the event attracted nearly 200 attendees. The seminar gave professionals a chance to share the latest developments and implementations of simpler and stronger online authentication technology with passkeys.

Watch the Recap Video

The seminar featured a dynamic mix of global and local case studies and offered a comprehensive overview of Passkey/FIDO and FDO (FIDO Device Onboard) implementations. Here are some key highlights:

FIDO Alliance Update: Andrew Shikiar (Executive Director & CEO of the FIDO Alliance) announced the launch of Passkey Central, a resource hub offering guidance on implementing passkeys for consumer sign-ins. The site is now available in Korean, Japanese, and English.
What’s New with Passkeys on Google Platforms?: Eiji Kitamura (Developer Advocate at Google) discussed recent passkey advancements, including Android’s Credential Manager API and broader passkey support on Google platforms.
From Passwords to Passkeys: The TikTok Passkey Journey: XK (Sean) Liu (Technical Program Manager at TikTok) shared how the TikTok platform adopted passkeys for both enterprise and consumer services.
Secure Smart TV Authentication with Passkeys: Min Hyung Lee (Leader of the VD Business Security Lab at Samsung Electronics) demonstrated how passkeys enhance smart TV user authentication and outlined the future for this technology.
FIDO in eCommerce: Mercari’s Passkey Journey: Naohisa Ichihara (CISO at Mercari) detailed the company’s motivations, challenges, and strategies for mitigating phishing risks through passkey adoption within the C2C marketplace.

The 2024 Seoul Public Seminar also featured an exciting and interactive segment: the FIDO Quiz Show. Designed to engage attendees while reinforcing key learnings, the quiz brought an additional layer of fun and competitiveness to the event.

How it worked:

Session Pop Quizzes: After each seminar session, key takeaways were tested through pop quizzes. Attendees who answered correctly were rewarded with FIDO Security Keys, generously supported by Yubico.

Real-Time Quiz Show: At the end of the event, a live quiz show engaged all attendees. By scanning a QR code, participants could join in and compete for prizes. Eunji Na from TTA emerged as the top scorer and won a Samsung Galaxy Smartphone!

Think you know FIDO Alliance and passkeys? Test your knowledge with the same 15 quiz questions (in Korean) by scanning the QR code in the image below.

The seminar gained significant local media attention from outlets such as IT Daily, DailySecu, Byline Networks, Datanet, BoanNews, eDaily, and Korea Economic Daily. Coverage highlighted the launch of Passkey Central, emphasizing its potential to accelerate passkey adoption and reduce reliance on passwords.

We extend a heartfelt thanks to all speakers, including Kieun Shin and Hyungchul Jung (Co-Vice Chairs of the FIDO Alliance Korea Working Group), Heungyeol Yeom (Emeritus Professor at Soonchunhyang University), Jaebeom Kim (TTA), Yuseok Han (AirCuve), Heejae Chang and Keiko Itakura (Okta), Junseo Oh (Ideatec), and Simon Trac Do (VinCSS) for their invaluable contributions.

We also express our gratitude to our sponsors, whose support made this year’s Seoul Public Seminar a resounding success.

Proudly Sponsored by:

Friday, 20. December 2024

FIDO Alliance

Business Reporter: Addressing the bias issue in biometrics

Bias in biometric identity systems still exists, but it is manageable, argues Andrew Shikiar at the FIDO Alliance When you unlock your smartphone, open your bank app, or approve a […]

Bias in biometric identity systems still exists, but it is manageable, argues Andrew Shikiar at the FIDO Alliance

When you unlock your smartphone, open your bank app, or approve a purchase on your laptop, you are using biometric authentication. It is such an unconscious part of our daily lives that if you blink, you might miss it.

It’s no wonder that biometrics are popular with consumers—they’re convenient and secure. Recent FIDO research found that consumers want to use biometrics to verify themselves online more, especially in sensitive use cases like financial services, where nearly one in two people (48%) said they would use biometric technology. In fact, in the FIDO Alliance’s latest online barometer survey, consumers ranked biometrics as the most secure and preferred way to log in.

But for consumers, governments and other implementers, there is still a lingering ‘elephant in the room’ that continues to disrupt adoption: bias.

Should we worry about bias in biometrics?

FIDO Alliance’s research, Remote ID Verification – Bringing Confidence to Biometric Systems Consumer Insights 2024, found that consumers are concerned about bias in biometric facial verification systems: while the majority of consumers (56%) felt confident that face biometrics systems could accurately identify individuals, a number still had concerns about discrimination present in some systems.

Concern surrounding the accuracy of biometric systems in processing diverse demographics has been developing in recent years. In the UK in 2021, for example, Uber drivers from diverse ethnic backgrounds took legal action over claims its software had illegally terminated their contracts as its software was unable to recognise them.

While the struggle of Uber drivers is just one example that underscores the issue, this problem is affecting people of colour and other underrepresented demographics more broadly—FIDO’s research found that one in four respondents feel they experience regular discrimination when using automated facial biometric systems (25%).

Feelings of discrimination and bias in facial recognition systems impact the entire user experience and erode faith in the technology overall. Half of British consumers in the survey said they would lose trust in a brand or institution if it were found to have a biased biometric system, and 22% would stop using the service entirely.

It’s clear why organisations like governments and banks would worry about these hard-hitting reputational and trust risks. Despite biometrics being widely accepted as a more convenient and highly secure technology, the small number of systems that aren’t as accessible are leaving an air of concern that is slowing down more mainstream adoption.

Addressing bias in facial verification  

The most important thing to note is that not all biometric systems are created equal. Currently, testing is done on a case-by-case basis for each organisation, which is both costly and time-consuming, with varying definitions of what “good” looks like.

Based on proven ISO standards and developed by a diverse, international panel of industry, government, and identity experts, FIDO Alliance’s new Face Verification Certification program brings the industry’s first independent certification to market to build trust around biometric systems’ performance.

The certification assesses a face verification system’s performance across different demographics, including skin tone, age, and gender, in addition to far more wide-reaching security and performance tests.

The intensive security and liveness testing also verifies that a provider’s face verification system can accurately confirm that identities are real and authenticating in real time, keeping threats like identity theft and deepfakes at bay. This is especially important for the most common use cases of face verification, like creating secure accounts, authenticating users, recovering accounts, and resetting passwords.

The beauty of independent certification is it sends a clear signal to consumers, potential clients, and auditors that the technology has been independently tested and is ready for both commercial and government use. It’s about building trust and showing that the provider takes security and fairness seriously.

More broadly, certification and independent global testing spark innovation and boost technological adoption. Whether you’re launching an identity verification solution or integrating it into regulations, open standards and certification provide a clear performance benchmark. This streamlines efforts, boosts stakeholder confidence and ultimately enhances the performance of all solutions on the market.

The future of identity 

As the way we verify digital identities keeps evolving and demand to prove who we are remotely increases, biometric systems must be independently verified and free from bias. All technologies rolled out to this scale need to be fair and reliable for everyone.

The FIDO Alliance’s program demonstrates solution providers are serious about making sure biometric identity verification technologies are trustworthy, secure, and inclusive for all users. It’s like having a gold star or a seal of approval that says, “Hey, you can trust this system to be fair and safe.”

Biometrics for online identity verification is not just a promising concept; it’s rapidly becoming a practical necessity in today’s increasingly digital world. They’re ready for implementation across various industries. With independent certification, organisations can jump over the final hurdle to widespread adoption, empowering a future of more seamless, digital and remote identity.

Wednesday, 18. December 2024

Digital ID for Canadians

Spotlight on Facephi

1. What is the mission and vision of Facephi? Facephi’s mission is to create seamless, trustworthy digital identity experiences that prioritize security, privacy, and compliance.…

1. What is the mission and vision of Facephi?

Facephi’s mission is to create seamless, trustworthy digital identity experiences that prioritize security, privacy, and compliance. ​We enable businesses to transform by connecting users to the digital resources they need efficiently and safely—whether as employees, partners, or consumers. Through our advanced identity verification technology, we simplify and secure the access of people ​to essential digital assets and services, ensuring that organizations worldwide can thrive in a digital-first world.

Facephi envisions a future where secure digital identity is at the heart of every interaction, seamlessly linking people, applications, services, and data. We aspire to be the foundation that supports and protects each digital connection, enabling individuals and organizations alike to navigate a secure digital world with assurance and confidence. ​We believe in a future where every identity and every access point is safeguarded by robust, transparent digital identity infrastructure.

2. Why is trustworthy digital identity critical for existing and emerging markets?

In today’s world, digital identity is essential to secure and scalable digital engagement. ​As more sectors—finance, healthcare, travel, and others—move toward online services, secure and trusted digital identity becomes critical. ​

Traditional perimeter-based security models no longer apply effectively, especially with the rise of cloud computing. For this reason, the “Identity-First Security” model has emerged as the most viable framework for protecting digital assets.

Our solutions help organizations transition to a robust, decentralized, and identity-centric security model. The convergence of secure authentication, data protection, and privacy compliance represents a necessary paradigm shift, particularly for emerging markets ​where secure and equitable digital access can drive significant economic growth.​

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Digital identity is a foundational element that powers global digital commerce, enabling access to essential services securely and universally.​ With secure digital identity, both individuals and organizations can engage in borderless business from anywhere in the world. ​Facephi facilitates this transformation by enabling secure identity verification that supports secure digital access at scale. ​As we build identity ecosystems that are interoperable, user-centered, and privacy-respecting, we support the global economy’s digital shift by ensuring seamless and secure interactions across borders. ​This shift has the potential to unlock access to vital services, foster trust across regions, and create an inclusive digital economy.

Facephi addresses the complex challenges of digital identity through an adaptable, interoperable approach to identity management. ​Our technology supports multiple identity roles—Issuer, Holder (Wallet), and Verifier—alongside a Trust Registry, enabling us to provide secure identity solutions at every step. ​We are aligned with standards like mDOC (ISO 18013), W3C Verifiable Credentials, and SD-JWT, ensuring compatibility with global frameworks. ​By focusing on interoperability and secure frameworks, we help organizations establish the trust and scalability needed for broad digital identity adoption. ​Our approach encompasses both the issuance and verification of credentials, combining compliance with innovative solutions to ensure secure, accessible, and user-centered digital identity management.
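
For readers less familiar with the standards named above, the snippet below is a minimal sketch of the general shape of a W3C Verifiable Credential, expressed as plain JSON. All identifiers and values are placeholders, and this is not Facephi’s credential format; real credentials carry cryptographic proofs produced by the issuer (or an external envelope such as SD-JWT).

```python
import json

# Minimal illustrative shape of a W3C Verifiable Credential.
# Every identifier and value below is a placeholder.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "IdentityCredential"],
    "issuer": "did:example:issuer123",
    "issuanceDate": "2024-12-18T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder456",
        "givenName": "Alex",
        "ageOver18": True,
    },
    # In practice the proof is produced by the issuer's signing key;
    # the value here is purely a placeholder.
    "proof": {
        "type": "DataIntegrityProof",
        "verificationMethod": "did:example:issuer123#key-1",
        "proofValue": "z...placeholder...",
    },
}

print(json.dumps(credential, indent=2))
```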

4. What role does Canada have as a leader in this space?

Canada has an essential role to play as a leader in secure digital identity, supporting ​both regulatory frameworks and technological standards that foster trust and innovation. ​As Canadians increase their reliance on digital services, it is critical to ensure the security and integrity of digital identity systems. ​By establishing standards for trusted digital architecture, Canada can help shape a secure, transparent ecosystem that enables users to control access to their personal information with precision and confidence. ​Canada’s commitment to a trustworthy digital identity infrastructure will set an example globally and drive progress in secure, interoperable identity systems.

5. Why did your organization join the DIACC?

Facephi joined the DIACC to collaborate with leading organizations ​in advancing secure, user-friendly digital identity standards. ​We share DIACC’s commitment to building a trusted framework ​that empowers people, businesses, and governments to interact safely online. ​By participating in DIACC, we contribute to and benefit from a collaborative approach to developing a secure digital identity framework that respects user privacy, ensures interoperability, and promotes innovation.

6. What else should we know about your organization?

Facephi is a global leader in digital identity verification and authentication, providing technology that enables secure, user-friendly access across industries. ​Our platform supports a wide range of digital identity solutions, from secure identity verification and authentication to credential issuance and verification, adhering to standards like OID4VCI for credential issuance and OID4VP for credential presentation. ​Our solutions address interoperability, Trust Frameworks, and compliance with global digital identity standards, providing a robust foundation for organizations pursuing digital transformation. ​Through advanced technology and strategic partnerships, Facephi is shaping the future of secure digital identity and enabling seamless, trusted interactions in an increasingly digital world.


Next Level Supply Chain Podcast with GS1

E-Commerce Made Easy: Starting and Scaling Your Online Business

Modern technology makes starting an online business easy. However, that also means stiffer competition.  How can aspiring entrepreneurs succeed in the world of e-commerce? In this episode, Jesse Ness, at Ecwid by Lightspeed, joins hosts Reid Jackson and Liz Sertl to discuss the essential steps and common pitfalls of starting and growing an online business. They discuss ways high-qu

Modern technology makes starting an online business easy. However, that also means stiffer competition. 

How can aspiring entrepreneurs succeed in the world of e-commerce? In this episode, Jesse Ness of Ecwid by Lightspeed joins hosts Reid Jackson and Liz Sertl to discuss the essential steps and common pitfalls of starting and growing an online business. They discuss how high-quality imagery, detailed product descriptions, and social media engagement can help your store stand out. Jesse also shares insights on emerging market trends like live selling and community engagement.

 

In this episode, you’ll learn:

How storytelling helps brands stand out in a crowded e-commerce market

The first steps to setting up a successful online store

Tips to overcome growth plateaus and how to scale your business effectively

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(01:43) Online selling with Ecwid

(04:02) How to set up an online store

(08:27) Share your brand story

(12:11) Why entrepreneurs give up too soon

(17:10) The rise of live selling and other e-commerce trends

(22:16) Jesse Ness’ favorite tech

(24:34) Using AI to enhance daily life

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Jesse Ness on LinkedIn

Check out Ecwid by Lightspeed


DIF Blog

DIF Technical Leaders Engage Korean Students at MegaStudy Academy

On 10th December 2024, Markus Sabadello, co-chair of the DIF Identifiers & Discovery WG, and Kyoungchul Park, chair of the DIF Korea SIG, delivered a guest lecture at Seoul's Gangnam MegaStudy academy, Korea's largest private educational institution. The one-hour session covered core decentralized identity technologies including

On 10th December 2024, Markus Sabadello, co-chair of the DIF Identifiers & Discovery WG, and Kyoungchul Park, chair of the DIF Korea SIG, delivered a guest lecture at Seoul's Gangnam MegaStudy academy, Korea's largest private educational institution.

The one-hour session covered core decentralized identity technologies including DIDs (Decentralized Identifiers), VCs (Verifiable Credentials), and digital wallets, along with DIF's activities and open-source projects like the Universal Resolver. Students demonstrated sophisticated understanding through their challenging questions about DID-Web3 relationships, VC proof mechanisms, and underlying trust models.
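
For readers curious about the Universal Resolver mentioned above, here is a minimal sketch of resolving a DID over its HTTP interface. The endpoint shown is the public development instance, which may be rate-limited or unavailable, and the DID used is a placeholder.

```python
import json
import urllib.request

# Resolve a DID Document via a Universal Resolver instance.
# dev.uniresolver.io is a public demo deployment; production use would
# point at a self-hosted resolver. Requires network access.
RESOLVER = "https://dev.uniresolver.io/1.0/identifiers/"
did = "did:web:example.com"  # placeholder DID

with urllib.request.urlopen(RESOLVER + did) as resp:
    result = json.load(resp)

# The response typically contains the DID Document plus resolution metadata.
print(json.dumps(result.get("didDocument", result), indent=2))
```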

The MegaStudy IT Academy extended a generous welcome to DIF's Markus Sabadello, thanks to the efforts of the DIF Korea SIG and chair Kyoungchul Park.

This session marks another milestone for the DIF Korea SIG since its launch in Busan in July 2023. Established alongside the Ministry of Science and ICT's Self-Sovereign Identity Technology Project, the SIG bridges DIF's technical standards with Korea's evolving digital identity ecosystem.

DIF's growing network of Special Interest Groups across Asia and Africa demonstrates our commitment to global collaboration while respecting local contexts. The Korea SIG exemplifies this approach through productive engagement between international standards and key national institutions. Learn more in our guest blog with Korea SIG Chair Kyoungchul Park.

Tuesday, 17. December 2024

FIDO Alliance

Biometric Update: Passkeys build momentum, enabling access to 15 billion online accounts

FIDO passkey adoption doubles in 2024 as major firms opt for passwordless log-in Passkeys are a biometric security trend to watch in 2025. The FIDO Alliance themed its 11th annual FIDO Tokyo […]

FIDO passkey adoption doubles in 2024 as major firms opt for passwordless log-in

Passkeys are a biometric security trend to watch in 2025. The FIDO Alliance themed its 11th annual FIDO Tokyo Seminar on how passkey adoption is accelerating, with presentations from Google, Sony Interactive Entertainment, Mastercard, and other organizations joining the journey to password-free living. Microsoft has confirmed its advice on how to make people love passkeys – as it sweeps aside a major vulnerability that exposed 400 million Outlook 365 users.

Major tech brands drive mainstreaming of passkey account log-ins

In 2024, Amazon made passkeys available to 100 percent of its users and has seen 175 million passkeys created for sign-in to amazon.com globally. Google says 800 million Google accounts now use passkeys, with more than 2.5 billion passkey sign-ins over the past two years and sign-in success rates improving by 30 percent. Sony adopted passkeys for the global PlayStation gaming community and saw a 24 percent reduction in sign-in time on its web applications.

Hyatt, IBM, Target and TikTok are among firms that have added passkeys to their workforce authentication options. More credential management products offering passkey options means more flexibility for consumers.

Japan joins passkey party in private sector, academia

The Japanese market showed a notable turn toward passkeys, with Nikkei, Nulab and Tokyu Corporation among firms embracing passwordless authentication technology. Nikkei will deploy passkeys for Nikkei ID as early as February 2025. Tokyu Corporation says 45 percent of TOKYU ID users have passkeys. And Nulab announced a “dramatic improvement in passkey adoption.”

Academia is helping drive innovation, with teams from Keio University and Waseda University winning acknowledgement for their research and prototypes at a slew of hackathons and workshops.

And FIDO, of course, is there to offer support, with its Passkey Central website resource on passkey implementation now available in Japanese, so that Japanese companies can take better advantage of its introductory materials, implementation strategies, UX and design guidelines, and detailed roll-out guides.

The FIDO Japan Working Group, which includes 66 of the FIDO Alliance’s member companies, is now in its 9th year of working to raise passkey awareness in the country.

Monday, 16. December 2024

FIDO Alliance

Podcast: The Password Problem

In this episode of the Trust Issues podcast, host David Puner sits down with Andrew Shikiar, the Executive Director and CEO of the FIDO Alliance, to discuss the critical issues surrounding password […]

In this episode of the Trust Issues podcast, host David Puner sits down with Andrew Shikiar, the Executive Director and CEO of the FIDO Alliance, to discuss the critical issues surrounding password security and the innovative solutions being developed to address them. Andrew highlights the vulnerabilities of traditional passwords, their susceptibility to phishing and brute force attacks, and the significant advancements in passwordless authentication methods, particularly passkeys. He explains how passkeys, based on FIDO standards, utilize asymmetric public key cryptography to enhance security and reduce the risk of data breaches. 
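
To make the “asymmetric public key cryptography” point concrete, here is a minimal sketch of the challenge-response pattern behind passkeys: the service stores only a public key and verifies a signature over a fresh challenge, so there is no shared secret to phish or breach. This simplifies away the actual WebAuthn/FIDO2 protocol (origin binding, authenticator data, attestation), and the key type is just an illustrative choice.

```python
# Simplified sketch of the public-key challenge/response idea behind passkeys.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# 1. Registration: a key pair is created on the user's device (authenticator).
#    Only the public key is sent to and stored by the online service.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# 2. Sign-in: the server issues a random challenge...
challenge = os.urandom(32)

# ...the device signs it locally (after user verification, e.g. biometrics)...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies with the stored public key.
# verify() raises InvalidSignature if the check fails.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge signature verified; no password or shared secret involved")
```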

The conversation also covers the broader implications of strong, user-friendly authentication methods for consumers and organizations, as well as the collaborative efforts of major industry players to make the internet a safer place. Additionally, Andrew highlights the importance of identity security in the context of these advancements, emphasizing how robust authentication methods can protect personal and organizational data. 

Tune in to learn about the future of authentication and the steps being taken to eliminate the reliance on passwords.


DIDAS

Switzerland’s e-ID Milestone: Parliament Resolves Differences, Final Vote Set for December 20, 2024

The Swiss Parliament has resolved all outstanding differences between the National Council and the Council of States regarding the Electronic Identity Act (BGEID), paving the way for a formal final vote scheduled for December 20, 2024. The implementation of SWIYU, encompassing both the electronic identity and its underlying trust infrastructure, has a potential to establish ...

The Swiss Parliament has resolved all outstanding differences between the National Council and the Council of States regarding the Electronic Identity Act (BGEID), paving the way for a formal final vote scheduled for December 20, 2024.

The implementation of SWIYU, encompassing both the electronic identity and its underlying trust infrastructure, has the potential to establish an open, interoperable ecosystem for digital credentials. This framework can be a solid foundation for the secure exchange of authentic data, thus fostering trustworthiness across digital applications in public administration, the economy, and civil society. Key principles of SWIYU and the e-ID include privacy by design, data minimization, user-centricity, and a commitment to openness and collaboration. We at DIDAS expect SWIYU, when fully implemented, to serve as an important building block promoting confidence in the digital realm, boosting economic growth and digital inclusion.

The new Swiss electronic identity (e-ID) system takes a completely different approach compared to the model that was rejected by voters in 2021. Unlike the earlier proposal, which handed the responsibility for issuing and managing digital identities to private companies, the new system is entirely state-operated. This ensures that the government, as a public entity, is responsible for issuing e-IDs and maintaining the necessary infrastructure. This change directly addresses the privacy and security concerns raised previously, making societal control easier. The updated framework is designed around user empowerment, with privacy by design and data minimization as fundamental principles, ensuring transparency and building confidence in its use.

What’s truly transformative is the system’s decentralized architecture, drawing its inspiration from Self-Sovereign Identity (SSI) principles. This gives individuals control over their own digital identities and the ability to decide what information to share with third parties, such as service providers. The design aligns with the “trust diamond” framework, which organizes four essential roles: the government as the issuer, individuals as the holders, service providers as the verifiers, and a governance framework that ensures everything operates within clear, enforceable and trusted rules. This structure creates a reliable and secure ecosystem for digital identity, addressing the shortcomings of the previous E-ID vision and resulting in a user-centric, privacy-preserving approach.

DIDAS is exceptionally proud to have made a number of key contributions to Switzerland’s efforts, ensuring the system reflects fundamental Swiss values such as federalism, direct democracy, self-determination, and autonomy. Since its inception in 2020, DIDAS has been a strong advocate for SSI principles, emphasizing user control over personal data and the need for a secure, privacy-preserving digital ecosystem. It has been an integral part of our vision that a digital trust ecosystem must not only safeguard privacy but also enable economic value creation.

Early Advocacy and Strategic Vision

In October 2020, DIDAS was established with the primary goal of positioning Switzerland as a leader in developing and implementing privacy-preserving technologies, services, and products related to digital identity and electronically verifiable data. This vision laid the groundwork for a digital trust ecosystem that emphasizes data sovereignty and identity management based on tight alignment with the SSI principles.

Early Advocacy for Self-Sovereign Identity Principles

In December 2021, DIDAS published an explainer on SSI, outlining its core principles and the association’s commitment to establishing a viable and thriving SSI ecosystem. The DIDAS initiative aimed from the start to educate stakeholders and promote the adoption of SSI principles and frameworks within Switzerland’s digital infrastructure, for a more privacy-preserving and frictionless digital future.

Contributing to the Dialog around National e-ID Legislation

By October 2021, DIDAS had provided extensive commentary on Switzerland’s target vision for the e-ID system. The association advocated for an “ecosystem of digital proofs”, where the e-ID would serve as one credential among many, enabling both governmental and private entities to issue other types of credentials. This approach aimed to create a flexible and future-proof foundation for digital interactions in Switzerland.

In December 2021, following a public consultation, the Swiss Federal Council decided to orient the implementation of the future e-ID system based on Self-Sovereign Identity (SSI) principles. DIDAS welcomed this decision, recognizing it as a commitment to a decentralized solution architecture that prioritizes maximum privacy protection and positions the e-ID as a cornerstone of a broader ecosystem of digital credentials. “Ambition level 3” anchored the approach of building an ecosystem of (business-domain) ecosystems in which, in addition to the E-ID, other verifiable credentials can be exchanged securely and reliably.

Promoting Technological Innovation

In its early stages, DIDAS members established an open sandbox environment to facilitate the development and testing of Self-Sovereign Identity (SSI) solutions. This sandbox provided a controlled setting where developers and organizations could experiment with SSI technologies, enabling the creation of interoperable and secure digital identity systems. By offering access to resources such as repositories and live demonstrations, DIDAS’s sandbox played a crucial role in iteratively advancing knowledge within Switzerland’s E-ID movement. 

DIDAS has consistently emphasized the importance of advanced digital signature technologies to enhance the Swiss e-ID framework. Following DIDAS’ statement in response to the E-ID technology discussion paper and its recommendation of Scenario “A” as a feasible technical starting point in February 2024, the association proposed in March 2024 to adopt the concept of dual signatures: a technical approach to bridging the gap between well-established but less feature-rich cryptography and newer, less widely known techniques. Supported by the US Department of Homeland Security, this technique involves attaching multiple digital signatures to a single payload, each offering distinct security or privacy features. This methodology enhances agility and robustness, accommodating various cryptographic standards and privacy needs without compromising data integrity.
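
As a rough illustration of the dual-signature idea, the sketch below attaches two independent proofs to the same payload: one well-established ECDSA signature and one placeholder standing in for a newer privacy-preserving scheme such as BBS. The envelope structure and field names are hypothetical, not the format proposed for the Swiss trust infrastructure.

```python
# Hypothetical sketch of "dual signatures": one payload, multiple proofs,
# each offering different security or privacy properties.
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

payload = json.dumps({"credential": "E-ID sample", "ageOver18": True},
                     sort_keys=True).encode()

# Proof 1: a well-established signature scheme (ECDSA over P-256).
classic_key = ec.generate_private_key(ec.SECP256R1())
classic_sig = classic_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# Proof 2: placeholder for a newer scheme with selective-disclosure or
# unlinkability features (e.g. BBS); a real implementation would call the
# corresponding library here.
privacy_preserving_sig = b"<BBS-style proof bytes would go here>"

envelope = {
    "payload": payload.decode(),
    "proofs": [
        {"type": "ecdsa-p256-sha256", "value": classic_sig.hex()},
        {"type": "bbs-placeholder", "value": privacy_preserving_sig.hex()},
    ],
}

# A verifier can accept the envelope if at least one proof type it trusts
# verifies, which is what makes the approach crypto-agile.
print(json.dumps(envelope, indent=2))
```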

Advocating for Economic Value Creation beyond the societal value of a self-sovereign E-ID

Beyond technological contributions, DIDAS has been a committed advocate for leveraging the Swiss Confederation’s e-ID programme to establish a digital trust and authentic data exchange ecosystem that creates sustainable economic value. The association further envisions this future ecosystem removing friction in B2B and cross-border processes by enabling higher levels of assurance in automation, significantly reducing the risk of fraud, simplifying compliance management, and allowing for the proliferation of digital trust-based businesses and innovations. On the basis of the DIDAS Sandbox, members have been experimenting with common use cases to explore ecosystem value creation and look forward to supporting issuers, holders and verifiers, as well as technology vendors, in further experimenting with the Confederation’s public beta infrastructure in early 2025.

In January 2024, during the World Economic Forum’s Annual Meeting in Davos, we collaborated with digitalswitzerland to co-organize the “Digital Trust” session at the digitalswitzerland Village. This event convened over 50 speakers and panelists, including industry leaders and policymakers, to discuss the critical role of digital trust in today’s interconnected world. 

 

 

In September 2024, at the event organized by the State Secretariat of International Finance (SIF) and the Swiss Financial Innovation Desk (FIND) at the Swiss Embassy in Singapore, we had the privilege of moderating and contributing to discussions on digital trust, emphasizing the importance of verifiable data and trust frameworks in global financial ecosystems. Our insights have also been shaping a soon-to-be-published paper, where DIDAS explores key principles and practical strategies to advance digital trust. 

Collaborative Efforts and Future Outlook

We strongly believe that DIDAS’s collaborative approach of engaging, as a not-for-profit, independent association, with government bodies, private sector stakeholders, and civil society has been instrumental in shaping Switzerland’s digital identity efforts. The association’s commitment to a pragmatic, principle-based, iterative, and inclusive methodology has ensured that SWIYU’s vision aligns with both national interests and international standards.

 

 

As Switzerland prepares for the final approval of the e-ID legislation on December 20, 2024, the foundational work of DIDAS continues to be important. We have a lot of work ahead of us to support the adoption of the E-ID and its mechanisms for exchanging authentic data. We also see our role in helping to increase the fluency of business leaders and innovators in applying these mechanisms. We will use the combined expertise of our members and our energy to promote and further enhance the key aspects of Ambition Level 3, governance and cross-ecosystem interoperability. Continued experimentation and dialogue are essential in order to uncover and realize the business value of this emerging Trust Infrastructure.

We are also proud to co-organize DICE (Digital Identity unConference Europe) in collaboration with Trust Square and the Internet Identity Workshop (IIW), rooted in Mountain View, California. DICE first launched in 2023 and exceeded expectations, with 160 expert participants contributing to dynamic discussions. The second DICE in 2024 was a milestone, opened by Federal Councilor Beat Jans, underscoring the importance of these participatory conferences and their contribution to the development of the E-ID Framework and the Swiss Trust Infrastructure. DICE fosters joint learning, evolves collective thinking, and accelerates the adoption of digital identity and verifiable data solutions. In 2025, two events are planned, further advancing open dialogue as a cornerstone of collaboration for authenticity and trust in the digital realm.

 

 

The association’s vision of a secure, adaptable, and authentic data ecosystem built on SSI principles underlines its dedication to a sustainable digital environment that favors privacy and security, while enabling significant economic value creation. 

We look forward to continuing to create positive impact with all of our members, partners and other stakeholders.

Cordially, The DIDAS Board

 

Further Articles and details on contributions in the DIDAS Blog

Saturday, 14. December 2024

Human Colossus Foundation

Journée de la Protection des Données

In the digital age, freedom of choice is profoundly affected by the way data is collected, shared and used. This freedom of choice is closely linked to the notion of privacy. Based on its work, the Foundation will present an approach where digital technology is used to empower patients, enabling them to make informed decisions about their health, while contributing to significant advances in
La liberté de choix à l’ère numérique

(an English translation can be found at the end)

La Fondation Human Colossus contribue à la conférence publique du mercredi 28 janvier 2025 organisée par la Faculté de droit, des sciences criminelles et d’administration publique (FDCA) de l’Université de Lausanne (inscription requise).

À l'ère du numérique, la liberté de choix est profondément influencée par la manière dont les données sont collectées, partagées et utilisées. La liberté de choisir est un concept fondamental qui revêt une importance particulière avec l'avènement des technologies numériques. Cette liberté de choix est intimement liée à la notion de sphère privée. Avec Internet et autres réseaux nous sommes confrontés à un large éventail de choix dans tous les aspects de notre vie quotidienne. Que ce soit dans le domaine des achats en ligne, des réseaux sociaux, des services bancaires en ligne ou dans le domaine de la santé, nous sommes constamment sollicités pour prendre des décisions aussi bien au niveau personnel que professionnel. Avec les outils d’intelligence artificielle qui s'immiscent dans notre quotidien notre sphère privée est-elle encore suffisamment protégée pour garantir autodétermination informationnelle ?

En prenant l’exemple de la médecine personnalisée dans le contexte de la liberté de choix à l'ère numérique, il est clair que l'accès aux données personnelles de santé et leur contrôle sont cruciaux. La médecine personnalisée s'engage à fournir des diagnostics et traitements individualisés. Cela nécessite des outils technologiques pour gérer les informations de santé de manière proactive afin de garantir que ces données soient utilisées de manière éthique et sécurisée. La technologie numérique doit aussi être utilisée pour renforcer l'autonomie des patients en leur permettant de prendre des décisions éclairées sur leur santé, tout en contribuant à des avancées significatives dans la recherche médicale.

Basée sur nos travaux, la Fondation présentera ces concepts à travers les enjeux actuels en Suisse liés au projet E-ID d’identité numérique et à son impact sur l’écosystème de la santé.  

Journée de la Protection des Données

Freedom of Choice in the digital age

The Human Colossus Foundation contributes to the public conference (in french) on Wednesday 28 January 2025 organised by the Faculty of Law, Criminology and Public Administration (FDCA) of the University of Lausanne (registration required).

In the digital age, freedom of choice is profoundly affected by the way data is collected, shared and used. This freedom of choice is closely linked to the notion of privacy.

With the Internet and other networks, we are faced with a wide range of choices in all aspects of our daily lives. Whether it's online shopping, social networking, online banking or healthcare, we are constantly being asked to make decisions at both a personal and professional level. With artificial intelligence tools making their way into our daily lives, is our private sphere still sufficiently protected to guarantee informational self-determination?

Taking the example of personalised medicine in the context of freedom of choice in the digital age, it becomes clear that access to and control of personal health data are crucial. Personalised medicine is committed to providing individualised diagnoses and treatments. This requires technological tools that allow patients to proactively manage their health information and ensure that it is used ethically and securely. Digital technology must also be used to empower patients, enabling them to make informed decisions about their health, while contributing to significant advances in medical research.

Based on its work, the Foundation will present these concepts through the current issues in Switzerland linked to the E-ID digital identity project and its impact on the healthcare ecosystem.

The Human Colossus Foundation is a neutral but technology-savvy Geneva-based non-profit foundation under the surveillance of the Swiss federal authorities. 

Subscribe to our newsletter

Friday, 13. December 2024

FIDO Alliance

ASRock Industrial Sets New Standard in Secure IoT Deployment with FDO Device Onboard

Imagine connecting and configuring devices on an oil rig in the middle of the ocean with limited human intervention. That’s the reality of what can be achieved with the FIDO […]

Imagine connecting and configuring devices on an oil rig in the middle of the ocean with limited human intervention. That’s the reality of what can be achieved with the FIDO Alliance’s Device Onboarding (FDO) standard. This is an example of the applications that IoT pioneer ASRock Industrial is bringing to life.

The rapid proliferation of IoT devices and Edge computing across industries has brought with it unprecedented opportunities and challenges. By 2025, over 75 billion IoT devices are expected to be connected globally, increasing complexities in device management and widening the attack surface for malicious actors. Recent studies suggest nearly 57% of IoT devices are susceptible to medium or high-severity attacks.

Corporate Overview

ASRock Industrial, a global leader in industrial systems and motherboards, has become one of the first vendors to provide FDO-enabled compute solutions for industrial applications. The company offers industrial PC systems, motherboards, edge computers, and other products for industries such as automation, robotics, entertainment, and security, as well as cutting-edge systems for smart cities, energy firms, pharmaceuticals, automotive and more to customers around the world. ASRock Industrial is leading the way in the industrial IoT industry with its FDO certified solutions that make device onboarding more efficient, less vulnerable, and more scalable.

“FDO’s advanced security framework enables us to deliver unparalleled reliability and adaptability, empowering our clients to scale confidently in increasingly complex environments.” – Kenny Chang, Vice President of Product and Marketing Division, ASRock Industrial

On the Edge: The Challenges of Industrial IoT

ASRock Industrial’s customers, like many in the industry, face challenges when deploying IoT devices and edge computing solutions quickly and securely. 

Security vulnerabilities: Traditional manual onboarding methods leave devices vulnerable to unauthorized access and data breaches. For example, a connected IoT device may still have the original manufacturer’s default password in place, which increases the risk of password-related device compromises. Manual processes also increase the risk of exposed, unmanaged devices on the network. In industries like energy and transportation, secure operations are vital to public safety and system reliability.

Time and cost inefficiencies: Not only are manual processes time-consuming, hiring skilled installers is extremely expensive. When calculating the time and cost for a skilled engineer to manually onboard edge devices, it’s important to include not only the technical setup time but also the travel time to what potentially may be multiple sites. ASRock Industrial estimates that before FDO, users could spend up to $1,000 per device implementation*. With FDO the installation is not only much faster and more secure, but it is also a task that can often be handled by existing on-site staff.

Complexity and scalability: Legacy onboarding approaches are complex to deploy and manage. This complexity is only further exacerbated by the remote and high-risk environments many industrial applications are in. Sending skilled engineers to these environments not only creates bottlenecks and slows scalability, it introduces safety risks that further amplify costs.

Lack of interoperability: The IoT space is very fragmented, with multiple proprietary platforms and operating systems. Existing “zero-touch” solutions are restricted in compatibility, making it hard to support clients across different sectors.

Creating an FDO Solution

To solve these challenges, ASRock Industrial turned to FIDO Device Onboard (FDO), and in doing so has become one of the market’s earliest adopters of this compelling technology. ASRock Industrial has integrated FDO into its flagship iEP-5010G series, a robust edge controller built for demanding industrial applications and harsh environments. The iEP-5010G series can operate within a wide temperature range of -40 to 70 degrees and supports 6-36VDC power inputs, 4G LTE, 5G, Wi-Fi 6E, and Bluetooth, and offers the most flexible I/Os and expansion options, making it a fit for industrial automation, robotics, transportation and more.

The ASRock Industrial FDO solution has been designed with FDO’s advanced features in mind. It delivers end-to-end FDO onboarding capabilities, encompassing all critical FDO functions: manufacturer, owner and rendezvous server. 
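
For readers new to FDO, the pseudocode-style sketch below illustrates the overall onboarding flow across the three roles named above. All function names are hypothetical placeholders; the real protocol messages (ownership voucher, TO0/TO1/TO2 exchanges) are defined in the FIDO Device Onboard specification.

```python
# Illustrative-only sketch of the FIDO Device Onboard (FDO) flow.
# Function names are made up; consult the FDO specification for the real protocol.

def manufacture_device(device_id):
    """Manufacturer provisions device credentials and creates an ownership voucher."""
    return {"device": device_id, "owner_chain": ["manufacturer"]}

def transfer_voucher(voucher, new_owner):
    """The voucher is extended to the buyer as the device changes hands."""
    voucher["owner_chain"].append(new_owner)
    return voucher

def owner_registers_with_rendezvous(rendezvous, voucher, owner_endpoint):
    """TO0-style step: the owner tells the rendezvous server where it can be reached."""
    rendezvous[voucher["device"]] = owner_endpoint

def device_onboards(rendezvous, device_id):
    """TO1/TO2-style steps: at first boot the device asks the rendezvous server
    where its owner is, then contacts the owner to receive credentials and config."""
    owner_endpoint = rendezvous[device_id]
    return f"device {device_id} onboarded via {owner_endpoint} with no manual setup"

rendezvous_server = {}
voucher = manufacture_device("iEP-5010G-0001")
voucher = transfer_voucher(voucher, "end-customer")
owner_registers_with_rendezvous(rendezvous_server, voucher, "https://owner.example/fdo")
print(device_onboards(rendezvous_server, "iEP-5010G-0001"))
```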

Rather than hard programming devices for each different operating system, the iEP-5010G series device controller can be deployed as one system without pre-installation of OS or additional programming. This simplifies manufacturing and provides a better customer experience with the flexibility to decide OS requirements later in the process.

The FDO standard and associated certification program ensure consistency and interoperability. Standardized onboarding means devices are consistently and correctly deployed every time, removing the risk of errors for ASRock Industrial’s customers. Most importantly, the open standards-based approach means it can work seamlessly with other partners in the industry and support players across the globe.

Results and Impact

While early implementation results are still being gathered, ASRock Industrial anticipates significant benefits for both the company and its customers.

One of ASRock Industrial’s earliest use cases lies in the smart city domain, where their FDO-enabled iEP-7020E series devices leverage FDO technology to automatically onboard hardware and software to connect electric vehicle (EV) charging points and related devices seamlessly. By enabling remote monitoring of charging stations across multiple locations, FDO has eliminated the need for engineers to visit sites physically. Its AI-driven analytics have dramatically enhanced operational efficiency, while remote surveillance has addressed key challenges such as charger hogging, vandalism, and unauthorized access. This capability ensures more efficient and timely incident management. As urban demands evolve, FDO serves as a robust foundation for scalable, secure deployments, delivering sustained benefits over time.

Looking Ahead

ASRock Industrial’s investment in FDO puts the company in a prime position to meet the rigorous demands of Industry 4.0 advancements and provide customers with security levels that protect against the expanding edge threat landscape. In 2024, ASRock Industrial became one of the first to achieve FDO certification, passing the FIDO Alliance’s rigorous independent testing processes. The results of this testing demonstrate that ASRock Industrial’s products fully meet the FDO specification, meaning partners and clients can trust the security, interoperability and FDO functionality of these solutions.

FDO certification also plays an important role in differentiating ASRock Industrial, making its products more marketable by meeting the requirements of a growing number of RFPs that call out FDO. Additionally, it reduces the company’s need to spend time and effort on intensive vendor bake-offs, allowing ASRock Industrial to spend more time innovating its product lines and value-added services.

“Deploying FDO has marked a pivotal shift for ASRock Industrial, establishing a new benchmark in secure, scalable onboarding for industrial edge AIoT solutions. This deployment cements ASRock Industrial’s leadership in industrial computing security and sets the stage for us to shape the future of Industry 4.0 with solutions that are both resilient and future-ready.”– Kenny Chang, Vice President of ASRock Industrial

Read the Case Study

DIDAS

Exploring the Future of Legal Entity Identities in the E-ID Ecosystem

The development of a robust ecosystem around the upcoming E-ID implementation in Switzerland represents an essential next step in building Trust Infrastructure. For us at DIDAS it has always been vital to emphasize that while the E-ID verifiable credential is very important, it is still just a building block, the first step towards a much ...

The development of a robust ecosystem around the upcoming E-ID implementation in Switzerland represents an essential next step in building Trust Infrastructure. For us at DIDAS it has always been vital to emphasize that while the E-ID verifiable credential is very important, it is still just a building block, the first step towards a much larger ecosystem where a variety of verifiable credentials will be issued and exchanged on a daily basis.

Many of these credentials, as well as the processes depending on them, will be implemented in the private sector, the vision known as “Ambition Level 3”. This is where the real economic value will come from. Much like the road infrastructure is built by the state and then fuels the economy, so will the Trust Infrastructure serve as a critical foundation, a privacy preserving enabler, for all kinds of native digital processes in the near future.

The E-ID framework, including the underlying Trust Infrastructure, is primarily targeting personal identity and credentials. To realize the complete potential of the Credentials’ Ecosystem, however, the open topics around organizational identity – the identity of an organization itself as well as that of its representatives – must be addressed.

Unlike natural persons, legal entities require unique considerations for authentication, governance, and compliance. A well-designed solution for legal entity identities could unlock significant opportunities for global trade, regulatory compliance, and business interoperability.

To succeed, the chosen approach must be flexible, future-proof, and globally scalable. Not only from a technical standpoint but also from a governance perspective. A scalable solution must accommodate the widest variety of jurisdictions, regulatory requirements, and business use cases, ensuring it is both technically sound and broadly acceptable.

The Role of vLEI Credentials

One of the most promising solutions in this space is the Verifiable Legal Entity Identifier (vLEI) credentials ecosystem. Pioneered by the Global Legal Entity Identifier Foundation (GLEIF), vLEI credentials aim to provide an extensible basis for an electronically verifiable and trustworthy way to identify legal entities and their representatives in the digital world. GLEIF, a globally recognized authority, oversees the issuance of Legal Entity Identifiers (LEIs) that are already widely used in the financial sector to improve transparency and reduce risk.
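
As a small aside, existing LEI records (the foundation the vLEI builds on) can already be looked up programmatically. The sketch below queries GLEIF’s public LEI records API; the LEI value is a placeholder that must be replaced with a real 20-character LEI, and the exact response fields may differ from what is shown here.

```python
import json
import urllib.request

# Sketch: look up an LEI record via GLEIF's public API. Requires network access.
lei = "0000000000EXAMPLE000"  # placeholder, not a real LEI
url = f"https://api.gleif.org/api/v1/lei-records/{lei}"

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

# The API returns JSON:API-style documents; fields accessed defensively here.
entity = record.get("data", {}).get("attributes", {}).get("entity", {})
print("Legal name:", entity.get("legalName", {}).get("name"))
print("Country:", entity.get("legalAddress", {}).get("country"))
```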

The vLEI system builds upon this foundation by leveraging cutting-edge technology to ensure verifiability and scalability. However, despite its many advantages, the underlying technical framework—centered around Key Event Receipt Infrastructure (KERI) and Authentic Chained Data Containers (ACDC)—has proven rather challenging to grasp.

Deep Dive into the vLEI and its technical foundations

To make it easier to understand and appreciate the solution, we have undertaken a deep dive into both the governance and technical aspects of the vLEI ecosystem. Our goal is to provide an accurate insight into the key mechanisms and characteristics that make the vLEI ecosystem a prime candidate to serve as the backbone for use cases and applications that require a globally scalable legal entity identity framework.

First, we look at governance, highlighting why the vLEI ecosystem can meet the diverse regulatory requirements of jurisdictions worldwide while accommodating the varied needs of businesses. Then, on the technical side, we explore the foundational KERI/ACDC technology, which promises enhanced security, efficiency, transparency and interoperability.

To learn with us, watch the recorded deep-dive session on vLEI and its technical underpinnings.

Stay tuned for more insights as we continue this journey toward shaping the future of legal entity identities in the digital era.

Together, let’s make the vision of a seamless, privacy-first digital ecosystem a reality. Stay connected and be part of the transformation!

Thursday, 12. December 2024

FIDO Alliance

Passkey Adoption Doubles in 2024: More than 15 Billion Online Accounts Can Leverage Passkeys for Faster, Safer Sign-ins

Momentum continues in Japan with notable passkey success stories and deployments from Nikkei, Tokyu, Google, Sony Interactive Entertainment, KDDI, LY Corporation, Mercari and NTT DOCOMO TOKYO, December 12, 2024 – […]

Momentum continues in Japan with notable passkey success stories and deployments from Nikkei, Tokyu, Google, Sony Interactive Entertainment, KDDI, LY Corporation, Mercari and NTT DOCOMO

TOKYO, December 12, 2024 – More than 15 billion online accounts can use passkeys for faster, safer sign-ins – more than double this time last year. The momentum behind FIDO and passkeys is the focus of today’s 11th annual FIDO Tokyo Seminar, where hundreds gathered to learn about the latest developments in the global push to eliminate dependence on passwords. Presenters include those from Google, Sony Interactive Entertainment, Mastercard, Waseda University, the Institute of Information Security, KDDI, LY Corporation, Mercari and NTT DOCOMO.

Passkeys become more widely available for consumer and workforce applications – and companies are seeing the benefits 

Passkeys provide phishing-resistant security with a simple user experience far superior to passwords and other phishable forms of authentication. Many consumer brands are reporting passkey success stories and business benefits; some notable new and recent announcements include: 

Amazon made passkeys available to 100% of its users, including in Japan, this year and already has 175 million passkeys created for sign-in to amazon.com across geographies.

Google recently reported that 800 million Google accounts now use passkeys, resulting in more than 2.5 billion passkey sign-ins over the past two years. Google’s sign-in success rates have also improved by 30%, and sign-in speeds have increased by 20% on average.

Sony Interactive Entertainment, the company behind PlayStation, released passkeys as an alternative to passwords for their global gaming community and observed a 24% reduction in sign-in time on its web applications for passkey users. Additionally, high conversion rates have been observed, with 88% of customers who are presented with the benefits of passkeys successfully completing enrollment.

Adoption also grew in the workforce this year as more companies bolstered their authentication options with passkeys, including Hyatt, IBM, Target and TikTok.

Consumers gained flexibility and choice for passkey management this year, as more credential managers, such as Apple, Google, Microsoft, 1Password, Bitwarden, Dashlane and LastPass expanded their passkeys support cross-ecosystem, and the FIDO Alliance announced new draft specifications for users to securely move passkeys and all other credentials across providers.

Notable Momentum in Japan

Specifically in Japan, new passkeys deployments and success were announced from Nikkei Inc., Nulab Inc., and Tokyu Corporation:

Nikkei Inc. unveiled their plan to deploy passkeys for Nikkei ID, enabling the millions of Nikkei ID customers to begin their migration from passwords to passkeys. This will launch in February 2025 or later.

Nulab Inc. announced a dramatic improvement in passkey adoption for Nulab accounts based on the outcome of the Passkey Hackathon Tokyo this past November.

Tokyu Corporation has reported that 45% of TOKYU ID users have passkeys, and sign-ins with passkeys are 12 times faster than a password plus an emailed OTP.

Additionally, Nikkei Inc., Nulab Inc. and Tokyu Corporation all successfully demonstrated their passkey implementations at the Passkey Hackathon Tokyo, organized by Google and sponsored by FIDO Alliance, in June 2024. Companies receiving awards included Nulab and Tokyu, as well as two teams of students from Japanese universities:

Keio University team received the grand winner award for adopting passkeys combined with an IoT device – a smart door lock created by a 3D printer.

Waseda University team received another FIDO award for their unique user authentication protocol and implementation combined with passkeys, verifiable credentials and zero-knowledge proofs.

In addition to these two teams, a group at the Institute of Information Security (Yokohama, Japan) presented their research entitled “A Study on Notification Design to Encourage General Users to Use Passkeys” at a workshop organized by the Information Processing Society of Japan (IPSG) on December 4, 2024. These activities demonstrate how students in academia are embracing passkeys as an attractive option for life without passwords.

Organizations that have already deployed passkeys for more than a year shared new successes:

KDDI now has more than 13 million au ID customers using FIDO and has seen a dramatic decrease (nearly 35%) in calls to its customer support center as a result. KDDI is managing FIDO adoption carefully for both subscribers and non-subscribers.

LY Corporation property Yahoo! JAPAN ID now has 27 million active passkey users. Approximately 50% of user authentication on smartphones is now done with passkeys. LY Corporation said that passkeys have a higher success rate than SMS OTP and are 2.6 times faster.

Mercari has 7 million users enrolled in passkeys and is enforcing passkey login for users enrolled with synced passkeys. Notably, there have been zero phishing incidents at Mercoin, a Mercari subsidiary, since March 9, 2023.

NTT DOCOMO has increased its passkey enrollments, and passkeys are now used for approximately 50% of authentications by account users. NTT DOCOMO notably reports significant decreases in successful phishing attempts, and there have been no unrecognized payments at docomo Online Shop since September 23, 2022.

To drive further adoption in Japan, the FIDO Alliance announced that Passkey Central, the website for consumer service providers to learn more about why and how to implement passkeys for simpler and more secure sign-ins, is now available in Japanese. Passkey Central provides visitors with actionable, data-driven content to discover, implement, and maintain passkeys for maximum benefits over time. The comprehensive resources on Passkey Central include:  

Introduction to passkeys

Business considerations and metrics

Internal and external communication materials

Implementation strategies & detailed roll-out guides

UX & Design guidelines

Troubleshooting

And more implementation resources, such as glossary, Figma kits, and accessibility guidance

Along with the many in Japan, there are 66 of the FIDO Alliance’s 300+ member companies actively taking part in the FIDO Japan Working Group (FJWG). The FJWG is now beginning its 9th year working together to spread awareness and adoption of FIDO in the region.

Consumers and workforce users are aware of, and want to use, passkeys

Passkeys are not only available across a wide array of services; recent studies have also shown that consumers and workforce users are aware of, and want to use, passkeys. Recent FIDO Alliance research shows that in the two years since passkeys were first made available, consumer awareness has risen by 50%, up from 39% in 2022 to 57% in 2024. Among consumers who have adopted at least one passkey, 1 in 4 enables passkeys whenever possible. A majority of consumers also believe passkeys are more secure (61%) and more convenient (58%) than passwords. Since 2023, passkey awareness among APAC consumers has grown significantly more than the global average and other regions surveyed in 2024. Consumers from China (80%), India (70%), Japan (62%), and Singapore (58%) reported significantly higher passkey adoption in the last year, with Australia (52%) and South Korea (44%) trending close to the overall average (59%).

Sources:

Online Authentication Barometer 2024: Consumer Trends & Attitudes on Authentication Methods.
https://fidoalliance.org/research-findings-consumer-trends-and-attitudes-towards-authentication-methods/

Consumer Password & Passkey Trends: World Password Day 2024.
https://fidoalliance.org/content-ebook-consumer-password-and-passkey-trends-wpd-2024/

About the FIDO Alliance

The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies, and remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, private, and easier to use when authenticating to online services.


DIF Blog

BBS: Where Proof Meets Privacy

Building the Future of Digital Privacy: How you can Contribute, Implement, and Advocate The applied cryptography community is making significant strides in standardizing BBS signatures and their extensions - a crucial development for privacy-preserving digital credentials. This work represents a major step forward in enabling more private and secure digital

Building the Future of Digital Privacy: How you can Contribute, Implement, and Advocate

The applied cryptography community is making significant strides in standardizing BBS signatures and their extensions - a crucial development for privacy-preserving digital credentials. This work represents a major step forward in enabling more private and secure digital interactions while maintaining the necessary balance between privacy and accountability.

What is BBS?

BBS is a secure digital signature mechanism that proves information is authentic and unchanged, similar to how a notary validates physical documents. Unlike other digital signature mechanisms, BBS enables powerful privacy features while maintaining security. Its name comes from its creators – cryptographers Dan Boneh, Xavier Boyen, and Hovav Shacham. This combination of security and privacy makes it particularly well-suited for digital credential systems, where protecting both authenticity and user privacy is crucial.

Why BBS Matters

BBS signatures provide a unique combination of privacy and practical utility that makes them especially valuable for digital credentials.

From a technical perspective, BBS stands out for:

Constant-size signatures regardless of the number of messages signed

True unlinkability between different uses of the same credential

The ability to sign and selectively reveal multiple messages within a single signature

These technical properties translate into practical benefits for digital credentials:

Selective disclosure: Users can prove specific facts about their credentials (like their city of residence) without revealing other details (like their full address)

Unlinkable disclosure: Each privacy-preserving use of a credential cannot be traced to other uses

Anti-theft features: Credentials can be cryptographically bound to their owner while maintaining privacy

Controlled recognition: Services can securely recognize returning users without enabling cross-service tracking

Together, these capabilities enable privacy-preserving digital credentials that are both secure and practical for real-world deployment - from government IDs to professional certifications to age verification systems.
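
To make the selective-disclosure flow more concrete, here is a hypothetical sketch of how an issuer, holder, and verifier could interact with a BBS-style API. The cryptography is stubbed out and every function name is an illustrative placeholder rather than a specific library; real implementations follow the CFRG drafts described below.

```python
# Hypothetical sketch of the BBS selective-disclosure flow (issuer -> holder -> verifier).
# The "crypto" is stubbed; real implementations follow the CFRG BBS drafts.

messages = [
    "given_name=Alex",
    "city=Zurich",
    "street=Example Street 1",   # the holder will NOT reveal this one
    "birth_year=1990",
]

def issuer_sign(messages):
    # Real BBS: one constant-size signature over all messages.
    return {"signature": "<BBS signature over all messages>", "messages": messages}

def holder_derive_proof(credential, reveal_indexes, presentation_nonce):
    # Real BBS: a zero-knowledge proof that reveals only the selected messages
    # and is unlinkable across presentations.
    revealed = {i: credential["messages"][i] for i in reveal_indexes}
    return {"revealed": revealed, "proof": f"<BBS proof bound to {presentation_nonce}>"}

def verifier_check(proof, issuer_public_key, presentation_nonce):
    # Real BBS: verify the proof against the issuer's public key and nonce.
    return True  # stubbed result for illustration

credential = issuer_sign(messages)
proof = holder_derive_proof(credential, reveal_indexes=[0, 1],
                            presentation_nonce="nonce-123")
assert verifier_check(proof, issuer_public_key="<issuer pk>",
                      presentation_nonce="nonce-123")
print("verifier learns only:", proof["revealed"])
```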

BBS Standards Landscape

The standardization of BBS involves several complementary efforts across standards bodies: 

Core Technical Specifications

BBS and its extensions are currently undergoing standardization within the Crypto Forum Research Group (CFRG) of the Internet Research Task Force (IRTF). (The IRTF is related to the IETF but focuses on longer term research related to the Internet.) 

The Decentralized Identity Foundation hosts the development of this work in the Applied Crypto Working Group.

This work is currently represented by 3 specifications:

"BBS Signatures": BBS Signatures are a privacy-preserving way to sign digital credentials. They let you prove specific facts about your credentials (like your city of residence) without revealing other details (like your full address). Specification: The BBS Signature Scheme

"Blind BBS": Blind BBS enables credential issuance where the issuer can cryptographically sign information without seeing its contents - useful for privacy-preserving identity binding. Specification: Blind BBS Signatures

"BBS Pseudonyms": BBS Pseudonyms are an anti-fraud mechanism to prevent digital credential cloning. They can be used by verifiers (like websites) to identify someone they've interacted with before, but in a way that cannot be correlated across different verifiers. Specification: BBS per Verifier Linkability

Status: The first item, BBS Signatures, is a mature working group document that completed an initial review by the CFRG panel. The other two – Blind BBS and BBS Pseudonyms – are on their way to adoption, and they could benefit from your support, as described below.

Implementation Standards

The W3C Verifiable Credentials Working Group is building on this foundation by developing the Data Integrity BBS Cryptosuite specification. This work integrates BBS signatures into W3C Verifiable Credentials, provides comprehensive test suites, and ensures that implementations across different platforms will be interoperable and reliable.

Call to Action

1. Voice Your Support

Urgent: Deadline December 20th, 2024

The CFRG has opened an official adoption call for both the Blind BBS and BBS Pseudonyms specifications. This is a crucial moment for these privacy-enhancing technologies.

Update: these specifications have been accepted! Thank you for your support.

Your voice matters - if you care about privacy-preserving technologies, please participate in the vote and share your support.

2. Get Involved

Want to dive deeper into this work? There are several ways to engage based on your interests: 

Developers: Contribute to W3C test suites or implement any of the specifications to test them out

Standards developers: Join the discussion at any of the above standards groups

Cryptographers: Review and provide feedback on the specifications and join the technical discussions

Enthusiasts / everyone: If you want to follow along with the progress, subscribe to DIF’s blog for updates

To participate in DIF’s Applied Crypto Working Group, you can join DIF; contact us at membership@identity.foundation if you have any questions.

Wednesday, 11. December 2024

Energy Web

Energy Web Unveils Fully Managed Worker Node on Launchpad

Simplifying Decentralized Computation Energy Web is proud to announce the launch of its fully managed Worker Node offering, now available through the Energy Web Launchpad SaaS platform. This innovative solution provides organizations with a powerful, streamlined way to execute decentralized computation while bridging technical complexity with operational simplicity What is the Worker Node?
Simplifying Decentralized Computation

Energy Web is proud to announce the launch of its fully managed Worker Node offering, now available through the Energy Web Launchpad SaaS platform. This innovative solution provides organizations with a powerful, streamlined way to execute decentralized computation while bridging technical complexity with operational simplicity.

What is the Worker Node?

The Worker Node is an off-chain runner designed to execute custom logic using Node-RED flows. Its lifecycle and operational parameters are managed through Energy Web X (EWX) worker node pallet solutions and solution group definitions.

Each Worker Node is equipped with a dedicated Worker Account, which is seamlessly linked to an EWX Operator Account. This linkage enables the Worker Node to continuously monitor on-chain actions, ensuring responsive adjustments to Operator Account solution group subscriptions.
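As a rough illustration of that monitoring behaviour, the sketch below shows a generic polling loop that reconciles a worker's running flows with the operator's current on-chain subscriptions. The function names (fetch_operator_subscriptions, set_flow_state) are hypothetical placeholders, not the actual EWX pallet or Node-RED APIs.

```python
# Hypothetical sketch of the Worker Node monitoring loop described above.
import time

def fetch_operator_subscriptions(operator_account: str) -> set[str]:
    """Placeholder for an on-chain read of the operator's solution groups."""
    return {"solution-group-a"}          # illustrative result only

def set_flow_state(solution_group: str, enabled: bool) -> None:
    """Placeholder for enabling/disabling the matching Node-RED flow."""
    print(f"{solution_group}: {'started' if enabled else 'stopped'}")

def run_worker_node(operator_account: str, poll_seconds: int = 30) -> None:
    active: set[str] = set()
    while True:
        subscribed = fetch_operator_subscriptions(operator_account)
        for group in subscribed - active:    # newly subscribed -> start flow
            set_flow_state(group, True)
        for group in active - subscribed:    # unsubscribed -> stop flow
            set_flow_state(group, False)
        active = subscribed
        time.sleep(poll_seconds)
```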

Revolutionary Capabilities

The Worker Node introduces a host of advanced features to support decentralized computation:

Atomic Decentralized Computation: Acts as the foundational unit for decentralized computation networks, driving DePIN (Decentralized Physical Infrastructure Networks) use cases.
Lightweight and Blockchain-Controlled: Fully managed via blockchain actions for secure and efficient operations.
Low-Code Simplicity: Powered by the Node-RED runner engine, enabling rapid deployment within a mature low-code environment.
Flexible Hosting: Supports diverse hosting options to suit varying user requirements.
Constantly Evolving: Regular updates based on feedback from early adopters ensure the Worker Node remains cutting-edge.

Why Choose the Worker Node Launchpad Offering?

The Launchpad’s fully managed Worker Node offering is the ideal choice for users seeking reliability and simplicity:

Eliminate the need to keep hardware, such as laptops, running 24/7.
Access a reliable, server-based solution supported by a dedicated team to handle any issues.
Transition seamlessly from the Marketplace Desktop App Worker Node to the managed SaaS alternative.

A Glimpse into the Future

The Energy Web ecosystem continues to grow, with exciting developments on the horizon, including a new Marketplace Web App to enhance modularity and flexibility. To celebrate the launch, Energy Web is offering 25 exclusive, one-month 100% discount codes for the Worker Node Managed Offering, valid until March 2025.

This exclusive trial empowers users to explore the Worker Node Launchpad Offering risk-free, with the option to continue or revert to the Marketplace app afterward — ensuring maximum flexibility.

Get Started Today

Discover the transformative potential of the Worker Node through detailed documentation and resources designed to help users transition effortlessly between the Marketplace app and the Launchpad offering.

The next few months promise exciting updates from Energy Web. Stay tuned for more surprises as we continue to expand the boundaries of decentralized technology.

About Energy Web
Energy Web is a global technology company driving the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to create innovative market mechanisms and decentralized applications, empowering energy companies, grid operators, and customers to take control of their energy futures.

Energy Web Unveils Fully Managed Worker Node on Launchpad was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 10. December 2024

Elastos Foundation

ELA Arbiters: The Final Piece in BeL2’s Vision for Bitcoin DeFi

2024 has been a breakthrough year for Bitcoin engineering, driven by the innovative toolsets provided by the Elastos SmartWeb ecosystem. The introduction of BeL2 (Bitcoin-Elastos Layer 2) has redefined what is possible for decentralized finance (DeFi) on Bitcoin. With a vision to make Bitcoin “smart” BeL2 provides a completely decentralized clearing network, enabling Bitcoin to […]

2024 has been a breakthrough year for Bitcoin engineering, driven by the innovative toolsets provided by the Elastos SmartWeb ecosystem. The introduction of BeL2 (Bitcoin-Elastos Layer 2) has redefined what is possible for decentralized finance (DeFi) on Bitcoin. With a vision to make Bitcoin “smart” BeL2 provides a completely decentralized clearing network, enabling Bitcoin to engage with cross-chain smart contracts while remaining on its secure main network. Imagine if Bitcoin could talk with other blockchains, execute complex contracts, and unlock its dormant potential—this is BeL2’s transformative promise.

BeL2 and the New Bretton Woods Vision

Since Bitcoin’s inception in 2009, it has grown to a $1.9 trillion market cap, cementing its role as the most secure and trusted cryptocurrency. However, its programmability and financial utility have remained limited compared to other blockchains. Solutions like wrapped Bitcoin (WBTC) have emerged but rely on centralized custodians to access smart contracts, undermining decentralization and sparking fierce debate over company ownership.

BeL2 disrupts this model by ensuring interoperability without transferring assets. Instead of moving Bitcoin across chains, BeL2 transmits messages, also known as proofs, which allow smart contracts on Turing-complete blockchains to verify and execute complex financial operations based on collateralisation on Bitcoin. This trustless model preserves Bitcoin’s integrity while enabling applications like loans, exchanges, and stablecoin issuance—laying the foundation for a Bitcoin-backed “New Bretton Woods” system. BeL2 integrates four key elements to realize its vision for Native Bitcoin DeFi:

Collateralization: Bitcoin is locked in non-custodial, native scripts on its mainnet, ensuring maximum security and decentralization for owners.
Verification: Zero-Knowledge Proofs (ZKPs) generate verifiable cryptographic proofs for Bitcoin transactions, providing trustless verification for Layer 2 applications.
Communication: The BTC Oracle bridges proofs from Bitcoin into Ethereum Virtual Machine (EVM) smart contracts, enabling cross-chain interactions.
Execution: Decentralized Arbiter nodes facilitate time-based execution and dispute resolution, ensuring fairness and trust in financial transactions.

Together, these components create a robust protocol that unlocks the full potential of Bitcoin for DeFi, providing developers with the ability to build smart contract applications which open up Bitcoin Finance while maintaining its security ethos.

ELA Arbiters: The Final Piece of the Puzzle

The Arbiter network is the final layer of BeL2’s V1 protocol, providing execution services for Bitcoin-backed transactions, resolving disputes, and maintaining trust through decentralized and collateralized mechanisms in return for fees. At the heart of this system lies ELA, a Bitcoin-secured BTCFi reserve asset fortified by merge mining. By leveraging Bitcoin’s immense hash power, ELA inherits uncompromised security without extra energy costs. Its fixed supply and transparent emission schedule make ELA an ideal collateral asset, anchoring BTCFi with Bitcoin-level trust. As the “queen” to Bitcoin’s “king,” ELA serves as the collateral that Arbiter nodes stake on the network to provide a secure, reliable Native Bitcoin DeFi environment.

This month, on the 30th of December, BeL2 will be releasing the Beta version of its Arbiter system. Key points to first understand:

The Beta stage marks the release of a product to an initial group of community users for testing and feedback.
For security purposes, BeL2 will implement a 3-phase rollout, beginning in December and concluding in April. Phase one, launching this month, introduces the Beta version.
The BeL2 Arbiter Beta will impose a $100 maximum limit on ELA collateral deposits.
Collateral can be provided in ELA or ELA BPOS NFTs.
Initially, rewards will be issued exclusively in ELA. However, upcoming applications and the scaling of Arbiters over the next few months will introduce utility for BTC rewards.

BTC Lending: A Flagship BeL2 Use Case

The Arbiter network will first be put to work supporting the BeL2 BTC lending demo, the application developed by the team to validate the underlying infrastructure for Native Bitcoin DeFi:

Secure Lending: Borrowers collateralize BTC without transferring it off the mainnet.
No Forced Liquidations: Fixed interest rates protect borrowers from short-term price volatility.
Transparent Dispute Resolution: Arbiter nodes ensure fair outcomes for all parties involved.

Criteria for Joining the Fully Rolled out Arbiter Network

Beyond Beta, once the network has stabilized and all phases of the rollout are complete, users will be required to meet the following criteria to join the finalized Arbiter Network:

A Dedicated BTC Wallet: Required for secure custody and dispute resolution.
Exit Flexibility: Arbiters can exit the network if no active arbitration commitments are pending, allowing them to manage their participation.
Stake ELA or BPoS NFTs: A minimum stake of 1,000–5,000 ELA is recommended to ensure commitment and secure arbitration responsibilities.
Define Term End Date: Arbiters must set a staking duration, with longer terms increasing their selection chances.
Purpose of Staking: Staked assets act as collateral, guaranteeing impartiality and commitment in arbitration events.
Set Your Fee Rate: Arbiters define an annual percentage during registration (e.g., 12%).
Example Income: A 2-month arbitration task with 10,000 ELA staked at 12% annual interest would yield 200 ELA (see the worked example after this list).
Aligned Rewards: Fees ensure that Arbiters are compensated for their role in securing transactions.
Event Monitoring: Promptly submit cryptographic signatures for arbitration events.
No Judgment Needed: Arbiters verify predefined events without adjudicating disputes, simplifying the process.
Manual Operations: Tasks can be performed via a web interface, though timeliness is critical to avoid penalties.
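Assuming the fee is simple pro-rata interest on the staked amount, as the example above implies, the arithmetic works out as follows.

```python
# Worked example of the quoted Arbiter fee: pro-rated simple interest
# on the staked ELA over the arbitration period.
def arbitration_fee(staked_ela: float, annual_rate: float, months: float) -> float:
    return staked_ela * annual_rate * (months / 12)

print(arbitration_fee(10_000, 0.12, 2))   # 200.0 ELA
```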

Applications requiring arbitration must:

Register with the Network: Initially approved by administrators, transitioning to DAO governance over time.
Log Transactions: dApps must log all transactions with Arbiter contracts at creation to ensure future arbitration is possible.

Fees for arbitration end when:

An arbitration request is initiated.
The transaction is closed or reaches its deadline.

A Vision Realized

The introduction of Arbiters completes BeL2’s foundational layer, enabling trustless, decentralized financial applications on Bitcoin. This 3-phase rollout marks a milestone in Bitcoin’s evolution from a store of value to a programmable asset that underpins a global, decentralized financial system. BeL2 is on track to redefine how Bitcoin interacts with the world, unlocking over $1 trillion in dormant value and empowering its community to embrace a future free from custodial risks and centralized limitations.

Join the BeL2 Movement

As an Arbiter Beta participant, you are not merely engaging with a network—you are actively shaping the future of decentralized finance with Elastos. This is a call to action for the community to support the network by setting up nodes, providing valuable feedback, and driving the BeL2 network toward a successful market launch in 2025. Detailed instructions on how to set up a node will be published before December 30th, supporting the launch of the Arbiter Beta network. Did you enjoy this article? To learn more, follow Infinity for the latest updates here!


Human Colossus Foundation

Switzerland: E-ID set to go live in early 2026

On December 6, 2024, the Swiss Federal Council made decisions regarding the technical implementation of the Confederation's new electronic identity proof (e-ID) and the underlying operational infrastructure. A press release[1] describes a two-stage launch of the e-ID, with the first delivery planned for 2026. The first stage will introduce a technology used by the European Union. At the same t

E-ID participation meeting, Zollikofen 2024.12.06

Human Colossus Foundation’s Dynamic Data Economy perspective

On December 6, 2024, the Swiss Federal Council made decisions regarding the technical implementation of the Confederation's new electronic identity proof (e-ID) and the underlying operational infrastructure. A press release [1] describes a two-stage launch of the e-ID, with the first delivery planned for 2026. The first stage will introduce a technology used by the European Union. At the same time, work will continue to develop additional solutions that could be used in a second stage to meet even higher privacy protection requirements, in particular the requirement that the various uses of the e-ID not be traceable to an individual.

DVS4U: Integration des E-ID Ökosystems in kantonale und kommunale Systeme (Integration of the E-ID ecosystem into cantonal and municipal systems)

On the same day, representatives of the Human Colossus Foundation attended the annual hybrid participation meeting of the E-ID project, which took place in the Federal Office of Information Technology, Systems and Telecommunication (FOITT) buildings in Zollikofen. The Human Colossus Foundation's (HCF) contribution is the technology for securing and styling the visualisation of the E-ID on mobile devices. E-ID pilots have already implemented HCF's Overlays Capture Architecture (OCA) [2] in different contexts (see, for example, the canton Thurgau proof of concept [3]). The Foundation will continue to support the E-ID team on semantic harmonisation for a styled but secure visualisation of digital proofs.

However, our goal at the Human Colossus Foundation is to promote and support E-ID projects beyond the semantic realm. We support E-ID and public service initiatives in Switzerland and abroad. Our Dynamic Data Economy (DDE) approach anticipates the future implementation of national infrastructure components, including distributed governance and decentralised authentication. These technologies go beyond the building of complex verifiable credential use cases. They enable an ecosystem approach, helping integrate many providers of digital proofs, further enabling diverse use cases for E-ID.

We welcome the two-stage approach of the project that confirms a go-live for 2026 while activating research and development for higher security and privacy.

References:

[1] Swiss Federal Council December 6 2024 press release https://www.admin.ch/gov/en/start/documentation/media-releases.msg-id-102922.html

[2] OCA website and specification https://oca.colossi.network/

[3] Canton Thurgau, DVS4U: Integration des E-ID Ökosystems in kantonale und kommunale Systeme: https://github.com/e-id-admin/general/blob/main/meetings/20241206_E-ID-Partizipationsmeeting_DVS4U_DE.pdf

Other Information

Swiss Digital Identity and Trust infrastructure blog posts

Public Beta is Open Source: https://www.eid.admin.ch/en/public-beta-ist-open-source-e

SWIYU – Notes on the design and name of the e-ID and trust infrastructure: https://www.eid.admin.ch/en/swiyu-e

Project E-ID GitHub: https://github.com/e-id-admin

The Human Colossus Foundation is a neutral but technology-savvy Geneva-based non-profit foundation under the supervision of the Swiss federal authorities.

Subscribe to our newsletter

Monday, 09. December 2024

DIDAS

Call to Participate in a Survey: Identifying Use Cases – A Critical Step for Digital Proof Ecosystems in Switzerland

In an increasingly interconnected world, digital processes must function seamlessly without interruptions. However, one element is indispensable for this: trust in digital data. Without trust, we face uncertainty, additional validation efforts, and delays in business processes. How can we ensure data integrity, prevent manipulation, and efficiently meet regulatory requirements? And how can we achie

In an increasingly interconnected world, digital processes must function seamlessly without interruptions. However, one element is indispensable for this: trust in digital data. Without trust, we face uncertainty, additional validation efforts, and delays in business processes.

How can we ensure data integrity, prevent manipulation, and efficiently meet regulatory requirements? And how can we achieve a state where information flows seamlessly between organizations without the need to question every document?

This is precisely the focus of our survey. By participating, you will help us better understand challenges and needs. Every perspective matters: whether you’re in leadership, IT, legal, or finance, your experiences and insights will directly contribute to making digital processes safer, more trustworthy, and easier to implement.

How can you help?

Participate: Take a few minutes to complete our survey.
Spread the word: Share the survey within your network. Every additional voice gives the results greater weight and impact.

Together, we can make digital data so trustworthy that it forms the foundation for seamless, efficient, and secure business processes.

Join now: [Participate here!]

Thank you for your support!

Contacts:

Dr. Roman Zoun is a board member of DIDAS and leads the Adoption Working Group. Professionally, he is with Swisscom, where he is responsible for the Digital Wallet division, focusing on the promotion of Self-Sovereign Identity (SSI) and digital trust infrastructures. As an expert in digital identities, data protection, and IT security, he brings extensive experience in identity and access management. In addition to his role at Swisscom, he serves as a board member of the OpenWallet Foundation, contributing to the development of secure and interoperable digital wallets. Dr. Zoun earned his Ph.D. in Computer Science from the Otto von Guericke University Magdeburg, with a strong academic background in cloud computing and mass spectrometry. His passion for innovation and commitment to trustworthy digital solutions make him a key figure in the field.

Jan Carlos Janke teaches and conducts research as a senior academic staff member on topics related to Digital Business Innovation, focusing on Blockchain, SSI, Digital Identities, Digital Trust, IT Management, and AI. As the Community Manager of DIDAS and Co-Lead of the Digital Identities Short Course and CAS Blockchain, he contributes to education in digital identity, data sovereignty, and blockchain technologies. He holds dual master’s degrees in Management and Finance from the European Business School in Wiesbaden and the EADA Business School in Barcelona. With nearly ten years of experience in the German financial sector, Janke was previously Head of Business Development at the Frankfurt School Blockchain Center under Philipp Sandner before joining HSLU.

More to Read?

As we step into an era defined by digital transformation, DIDAS is at the forefront, championing the adoption of Self-Sovereign Identity (SSI) and trust infrastructures. Our vision is clear: a Switzerland with more privacy and less friction in the digital realm. Here’s how we’re working to make this vision a reality.

Why We Do What We Do

At DIDAS, our purpose drives everything we do. We are committed to:

Educating, identifying, creating, and improving SSI use cases in Switzerland.
Supporting both companies and individuals with an open, customer-centric approach.
Building an ecosystem that fosters understanding, innovation, and optimization of Self-Sovereign Identity.

Our goal is to tackle the pain points of digital interactions by creating solutions that:

Enhance efficiency and effectiveness in digital systems.
Establish a common language for trust infrastructures.
Improve user experience and reduce friction.

Ecosystem Building: A Multi-Dimensional Approach

(Source: Grivas, HSLU 2024 Master Business IT - Digital Ecosystems)

DIDAS leverages the diversity of ecosystems to drive adoption and innovation:

Open Ecosystems
Logic: Diversity of partners for a broad knowledge base. Example: Crypto Valley, Impact Hub, Cardossier. Goal: Knowledge exchange between partners.

Controlled Ecosystems
Logic: Joint alignment of a few partners under an orchestrator. Example: Helvetia Eco-System HOME, Twint. Goal: Deliver superior value propositions through aligned collaboration.

Platform Ecosystems
Logic: Harnessing network effects with interchangeable partners. Example: Amazon, AppStore. Goal: Create superior value propositions through network effects.

By fostering these ecosystems, DIDAS ensures a balance between openness, control, and platform-driven innovation.

Adoption: A Long Journey with High Rewards

(Source: Zoun, DIDAS 2024, Adoption Working Group Presentation)

The adoption of trust infrastructures is a gradual process but one that promises immense benefits:

Authentic data enables better efficiency and liability management.
Privacy-friendly solutions redefine the user experience, minimizing friction.
Stakeholders gain a competitive edge through participation in cutting-edge digital ecosystems.

Together, let’s make the vision of a seamless, privacy-first digital ecosystem a reality. Stay connected and be part of the transformation. Learn more at didas.swiss.

Wednesday, 04. December 2024

Energy Web

Generic Green Proofs Use Case (Applied to the Maritime Industry): Katalist

Every purchase you’ve made until now — whether it’s a pair of shoes, a mobile phone, a laptop, or even exotic fruits — was most likely transported by sea. Globally, the maritime industry is responsible for approximately 2% of overall carbon emissions while serving as a critical component of nearly all industrial companies’ supply chains and transportation systems. Why Katalist? Decarbonizing the
Every purchase you’ve made until now — whether it’s a pair of shoes, a mobile phone, a laptop, or even exotic fruits — was most likely transported by sea. Globally, the maritime industry is responsible for approximately 2% of overall carbon emissions while serving as a critical component of nearly all industrial companies’ supply chains and transportation systems.

Why Katalist?

Decarbonizing the maritime industry is a challenging task due to the long lifespan of ships (around 35 years), which limits the speed of fleet replacement. This is why the focus has shifted to the use of low-emission fuels, enabled by a Chain of Custody model called “Book and Claim.”

The system allows for the decoupling of the physical low-emission fuel from its associated low-emission attribute, which can be traded separately. This approach makes low-emission fuels more accessible.

This model serves as an interim mechanism, buying time until global infrastructure is fully in place to support widespread physical use of low-emission fuels without needing to decouple the sustainability attributes of maritime fuel from its actual usage.
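A minimal sketch of the book-and-claim idea, assuming a simple in-memory registry: the emission-saving attribute is issued as a certificate that can be transferred down the value chain independently of the physical fuel and is retired once claimed. Class and field names are illustrative and do not reflect the actual Katalist or Green Proofs data model.

```python
# Hypothetical book-and-claim registry: the low-emission attribute moves
# as a certificate, decoupled from the physical fuel.
from dataclasses import dataclass

@dataclass
class Certificate:
    cert_id: str
    co2e_saved_tonnes: float
    holder: str                      # current owner of the attribute
    retired_by: str | None = None    # set once the saving is claimed

class BookAndClaimRegistry:
    def __init__(self) -> None:
        self._certs: dict[str, Certificate] = {}

    def book(self, cert_id: str, co2e_saved: float, fuel_buyer: str) -> Certificate:
        cert = Certificate(cert_id, co2e_saved, holder=fuel_buyer)
        self._certs[cert_id] = cert
        return cert

    def transfer(self, cert_id: str, new_holder: str) -> None:
        cert = self._certs[cert_id]
        assert cert.retired_by is None, "retired certificates cannot move"
        cert.holder = new_holder

    def claim(self, cert_id: str, claimant: str) -> None:
        cert = self._certs[cert_id]
        assert cert.holder == claimant, "only the current holder may claim"
        cert.retired_by = claimant   # reportable in sustainability accounts, no longer tradable

registry = BookAndClaimRegistry()
registry.book("CERT-001", co2e_saved=120.0, fuel_buyer="ShippingCo")
registry.transfer("CERT-001", "FreightForwarderCo")
registry.transfer("CERT-001", "CargoOwnerCo")
registry.claim("CERT-001", "CargoOwnerCo")
```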

How?

To build trust among participants, the Mærsk Mc-Kinney Møller Center for Zero Carbon Shipping and RMI have co-developed and continuously refined a robust methodology that all actors can rely on.

The role of Energy Web Foundation was to provide the technology platform in a way that shortened implementation timelines, enabled hypothesis testing, and allowed for necessary adjustments in a relatively short time compared to building everything from scratch.

Thanks to our Generic Green Proofs approach, we successfully developed two proof-of-concepts, gathering valuable insights that enabled the team to refine the solution into a full production platform — ready to launch in November at COP29 (link to the press release).

What might have taken years to build, we accomplished in months, delivering critical insights that shaped the final platform.

What Companies Can Do on the Platform

Issuance

The creation of certificates on Katalist follows a well-thought-out methodology. This ensures that the most relevant data is collected from ships and voyages, along with the appropriate documentation to attest to fuel attributes and usage.

Transfer

Participants on the platform can transfer certificates based on predetermined rules and restrictions.

Retirements and Export (Claims)

The ultimate goal for companies using the platform is to claim emission savings from alternative fuel usage in their sustainability reports.

Public Retirement Table

Depending on the type of company, industry actors can view varying levels of information about actual fuel usage. Once the final actor (Cargo Owner) has claimed the certificate, the data becomes publicly available through the “Public Retirement Table.”

Who?

The platform is designed for key maritime industry actors:

Shipping Companies: Operators or owners of ships are responsible for purchasing fuel and uploading relevant data to the platform.
Freight Forwarders: Intermediaries in the maritime transportation cycle and vital parts of the supply chain.
Cargo Owners: Responsible for the contents of shipments.

What Will 2025 Bring?

By the end of the year, we anticipate onboarding several companies to the platform. By early 2025, the first certificates will be issued using Katalist. Additional features will be developed, with updates shared as plans progress.

Conclusion

Energy Web contributes to decarbonization efforts by serving as a climate-tech partner, significantly accelerating the journey from ideas to proof-of-concepts and minimum viable products.

Our expertise lies in creating generic frameworks that enable faster deployments, adaptable to decarbonization plans that drive impactful climate change solutions.

If you’re interested in bringing your ideas to life faster and more efficiently using proven methodologies and connections with relevant industry actors, let’s discuss how Green Proofs can be applied to your use case. Contact us!

Generic Green Proofs Use Case (Applied to the Maritime Industry): Katalist was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.


Next Level Supply Chain Podcast with GS1

Year in Review: 25 Supply Chain Stories That Shaped 2024

This year has been packed with incredible supply chain stories showcasing innovation, collaboration, and inspiring moments in supply chain logistics. In this episode, hosts Reid Jackson and Liz Sertl take you through their favorite conversations of the year, featuring insights from industry leaders like Gena Morgan, Dr. Darin Detwiler, and Chuck Lasley. They discuss key topics that defined the y

This year has been packed with incredible supply chain stories showcasing innovation, collaboration, and inspiring moments in supply chain logistics.

In this episode, hosts Reid Jackson and Liz Sertl take you through their favorite conversations of the year, featuring insights from industry leaders like Gena Morgan, Dr. Darin Detwiler, and Chuck Lasley. They discuss key topics that defined the year—data quality, traceability, retail automation, and the UPC barcode—all while looking ahead to what’s next in 2025.

 

In this episode, you’ll learn:

Key trends that influenced supply chains in 2024

Innovations driving transparency and traceability

Exploring the future of 2D barcodes and data quality

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(01:48) 2D barcodes, data quality, and GS1 standards 

(05:20) E-commerce and supply chain challenges

(06:37) Improving traceability and food safety

(11:04) The adoption of the barcode and its multiple uses

(13:11) Liz’s favorite episodes 

(15:14) Reid’s favorite episodes

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

Tuesday, 03. December 2024

DIDAS

Advancing Digital Trust – Insights into the Development of Switzerland’s State E-ID

Watch: A Panel Discussion on the Development of Switzerland’s State E-ID On September 19, 2024, a fact-based discussion took place on the progress of Switzerland’s state electronic identity (E-ID). The network policy evening was organized by the digital society and highlighted the key challenges and opportunities of a project that is set to shape the ...
Watch: A Panel Discussion on the Development of Switzerland’s State E-ID

On September 19, 2024, a fact-based discussion took place on the progress of Switzerland’s state electronic identity (E-ID). The network policy evening was organized by the Digital Society and highlighted the key challenges and opportunities of a project that is set to shape the country’s digital future. Speakers included Annett Laube, Professor of Computer Science at Bern University of Applied Sciences, Rolf Rauschenbach, Information Officer for E-ID at the Federal Office of Justice, and Daniel Säuberli, President of the Digital Identity and Data Sovereignty Association (DIDAS).

The Vision for a State E-ID

Source: Der Nutzen der E-ID (The Benefits of the E-ID)

The state-operated E-ID is more than a technological tool; it is envisioned as a foundation for trust and efficiency in the digital space. Following the rejection of the privately run E-ID proposal in 2021, a state-led solution focusing on privacy, data sovereignty, and user-friendliness has taken center stage. The goal is to create a digital identity that meets citizens’ needs while establishing the technological and legal groundwork for innovative digital services.

Contributions from the Experts

Prof. Annett Laube provided insights into the technological and scientific standards required for the development of the E-ID. She emphasized the importance of ensuring privacy and security through principles like privacy by design and highlighted the critical role of transparency enabled by open-source solutions. User binding is the secure process of linking a digital identity to its rightful owner, ensuring only authorized individuals can access and use the E-ID while maintaining privacy and trust.

Rolf Rauschenbach outlined the strategic and political considerations underpinning the project. He discussed the challenges of implementing a solution that is both technically robust and user-friendly and stressed the need for a clear legal framework.

Daniel Säuberli, representing DIDAS, drew attention to the essential role of trust in digital infrastructures. He emphasized three fundamental principles necessary for the success of the E-ID:

Strengthening Data Sovereignty: Citizens must have full control over their personal data, without unnecessary interference by third parties.
Promoting Interoperability: The E-ID must function on both a national and international level to enable seamless cross-border digital interactions.
Building Trust Through Transparency: Open communication and active collaboration with the public, private sector, and academia are critical to gaining widespread acceptance for the E-ID.

Ambition Levels for the E-ID

Source: Zielbild E-ID (Target Vision for the E-ID)

 

The development of the state E-ID is structured around three ambition levels, each reflecting its growing capabilities and societal value:

Basic Functionality: At this level, the E-ID serves as a secure and user-friendly tool for digital identification. It is used primarily for core applications, such as accessing federal online services like tax filings or registry extracts, and ensures reliable authentication for administrative processes.
Integration into Cantonal and Municipal Services: The second level expands the E-ID’s use to services provided by cantons and municipalities. The goal is to offer citizens seamless digital access to public services across all administrative levels. Examples include registering address changes, participating in electronic voting, or applying for local permits. This integration transforms the E-ID into a vital part of Switzerland’s public administration.
Adoption by the Private Sector: The highest ambition level envisions the E-ID as a universal identification tool for private-sector services. This includes applications in areas such as online banking, e-commerce, and healthcare. By creating a digital ecosystem where the E-ID is widely accepted, it becomes a driver of innovation and digitization across various industries.

Connecting Principles to Ambition Levels

The ambition levels of the state E-ID are intrinsically linked to the principles outlined by Daniel Säuberli:

Data Sovereignty forms the foundation of the basic functionality, ensuring citizens retain control over their data.
Interoperability becomes essential as the E-ID is integrated into cantonal and municipal services, enabling seamless operation across different platforms.
Transparency is vital for the third level, fostering trust and accountability as the E-ID expands into the private sector.

Challenges and Outlook

Despite significant progress, critical questions remain: How can the E-ID be effectively integrated into existing systems? How can transparency and privacy be guaranteed? And how can public trust be secured? Addressing these challenges will require collaboration, clear communication, and an unwavering commitment to the outlined principles.

DIDAS: A Driving Force for Digital Trust

For DIDAS, the state E-ID represents a cornerstone for promoting digital trust and innovation. As a platform for digital trust infrastructures, DIDAS is committed to advancing the development of the E-ID and creating the conditions for a sovereign digital society.

With its state-operated E-ID, Switzerland has the opportunity to become a leader in digital identities—rooted in the values of trust, security, and innovation.

Conclusion

The state E-ID offers Switzerland the chance to establish a modern and trustworthy digital ecosystem centered on the needs of its citizens. By strengthening data sovereignty, ensuring interoperability, and fostering transparency, the E-ID is a crucial step forward in the country’s digital transformation.

However, its success depends on transparency, collaboration, and clear communication to build trust and public acceptance. DIDAS plays a pivotal role by bridging stakeholders and promoting the principles of digital trust and data sovereignty. Through a clear vision and open dialogue, Switzerland can set a global example for a digital future built on trust, innovation, and user empowerment.

Monday, 02. December 2024

MOBI

2024 Report: MOBI Milestones

MOBI Milestones in 2024 Completed the build of the first Citopia Node and Citopia Self-Sovereign Digital Twin™ (SSDT™), demonstrating how any third-party services, such as a Zero-Knowledge Proof (ZKP) of location service, can be deployed and utilized on a Citopia Node. MOBI and its members kicked off a three-year initiative [...]

MOBI Milestones in 2024

January 2024: Citopia Node Proof of Concept Completed

Completed the build of the first Citopia Node and Citopia Self-Sovereign Digital Twin™ (SSDT™), demonstrating how any third-party services, such as a Zero-Knowledge Proof (ZKP) of location service, can be deployed and utilized on a Citopia Node.

January 2024: Kickoff of Global Battery Passport (GBP) Minimum Viable Product (MVP)

MOBI and its members kicked off a three-year initiative to develop Citopia Global Battery Passport System (GBPS). This initiative involves leveraging Citopia and Integrated Trust Network (ITN) services to demonstrate secure battery data exchange and identity validation between participants and test core functionalities like selective disclosure (where data is shared only with intended recipients). The goal is to facilitate seamless coordination and communication throughout the battery value chain for enhanced circularity, accountability, efficiency, and regulatory compliance. Read the Citopia GBPS 1-Pager

February 2024: ITN Release 0.3.0

Complete refactoring and architecture of the ITN SSDT to a highly modular design based on the Aries Agent Framework. Upgraded to DIDComm v2 communication protocol.

February 2024: Held MoCoTokyo in partnership with AWS and DENSO

On 19 February 2024, MOBI hosted MoCoTokyo in collaboration with Amazon Web Services (AWS) and DENSO, offering a one-of-a-kind summit for industry leaders to network; share solutions; and collectively explore critical challenges and opportunities at the forefront of the circular economy transition. Presentations from key industry players delved into diverse use cases such as decentralized energy systems and electric vehicles, all underscored by the imperative of integrating Web3 technologies for Global Battery Passport (GBP) implementation. Read the event report

March 2024: Joined the eSTART Coalition

The Electronic Secure Title and Registration Transformation (eSTART) Coalition is a group of leading auto industry organizations united in advocating for modern solutions to replace the paper-based processes that currently dominate state and local DMV operations. Modernizing these processes will result in significant cost and time savings for consumers, state and local DMV operations, and industry participants. Read the press release

March 2024: Launched the MOBI Members Portal

The MOBI Members Portal is a hub for our members to access essential meeting minutes, Working Group documents, in-progress deliverables, demos, and more. Read the User Guide

April 2024: ITN Releases 0.3.1

Major upgrades to the network and the ITN SSDT regarding new functionality, code refactoring, documentation extension, and test coverage. In particular, this release added Arbitrum One, a public Ethereum Layer 2 scaling solution (an Optimistic Rollup), as a new verifiable data store for the Decentralized Identifiers (DIDs) anchored on the ITN, in addition to Hyperledger Fabric, a private DLT network.

April 2024: Announced Interoperability Pilot with Gaia-X 4 moveID

MOBI and Gaia-X 4 moveID announced a joint initiative to advance cross-industry interoperability. The initiative focused on the joint implementation of two pioneering MOBI standards: MOBI Vehicle Identity (VID) and MOBI Battery Birth Certificate (BBC). More specifically, the initiative centered around linking physical objects — e.g., vehicles and their parts such as batteries — to Web3 digital identities and credentials. Read the press release

May 2024: Completion of Phase I-Stage 1 of the GBP MVP

In Stage 1, implementers demonstrated the ITN identity services of one-to-one cross-validation for battery identity and data. The ITN serves as a federated (member-built and operated) registry for World Wide Web Consortium (W3C) Decentralized Identifiers (DIDs), offering Self-Sovereign Identity (SSI) management for connected entities such as batteries and their value chain participants. Read the press release

June 2024: Kickoff Workshop for Phase I-Stage 2 of the GBP MVP

This pivotal meeting, hosted in partnership with DENSO, brought together member companies united in their commitment to driving circularity, compliance, and resilience across the battery sector through the development of the GBP MVP. Members reviewed the work done in Stage 1 and prepared for Stage 2.

July 2024: ITN Releases 0.3.2

Minor release that primarily updated the ITN storage network, alongside overall node improvements.

September 2024: ITN Releases 0.3.3

Minor release. Primarily added expanded DID management capabilities and continued node improvements.

October 2024: Released the Battery Birth Certificate (BBC) Technical Specifications V1.0

The BBC schema is a cornerstone of MOBI’s Global Battery Passport (GBP) system, which is being developed and tested with public and private partners worldwide. This first version outlines the necessary static data attributes for compliance with regulations such as the EU Battery Regulation and the CARB ACC-II Regulation. Access the complete standard

October 2024: Published the MOBI Web3 White Paper V4.0

Read the updated White Paper here!

November 2024: Published the draft charter for the Artificial Intelligence (AI) Working Group

The rapid evolution of AI presents critical opportunities across the digital landscape. We’ve already begun to see radical changes to business efficiency, data privacy, digital trust, and regulatory compliance—movements that portend seismic market shifts to come. This year, we held several workshops to discuss the potential for an AI Working Group within MOBI. Next year, we’re formally launching the Working Group to discuss AI-related challenges and opportunities, co-develop standards, and monitor pertinent regulations globally. The AI Working Group will be held during MTS meeting hours. Read the draft charter

November 2024: Completed the first year of a three-year initiative for the Citopia Global Battery Passport System (GBPS)

In the first year of the GBPS initiative, MOBI and its members concentrated on understanding global regulatory requirements and developing use cases aligned with Web3 technology for secure battery data management. Core technical achievements include the implementation of verifiable credentials, selective data disclosure, track and trace of asset ownership, and secure data exchange. Read the Citopia GBPS 1-pager

December 2024: Released Battery State of Health (SOH) Labeling and Certification White Paper

Battery SOH is an important variable that not only defines the performance of the batteries but also stands as one of the key factors in economic decisions related to the resale, recycling, repurposing, and reuse of such batteries and battery-powered devices. Developed with members of the Electric Vehicle Grid Integration (EVGI) II Working Group, this white paper offers guidance on the current state of practice and proposes a framework for SOH labeling and certification in line with critical regulations. Read the complete White Paper

December 2024: ITN Release 0.3.4

Minor release. Primarily added support for the OpenID Connect W3C VC issuance draft standard implementation and the OpenID Connect W3C VP presentation draft standard implementation.

December 2024: Completing Phase I: Stage 2 of the GBP MVP

During Phase I, MOBI and its members concentrated on understanding global regulatory requirements and developing use cases aligned with Web3 technology for secure battery data management. Core technical achievements include the implementation of verifiable credentials, selective data disclosure, track and trace of asset ownership, and secure data exchange. By running nodes and testing selective disclosure, the team validated a decentralized framework where sensitive information can be securely shared among authorized parties without risking intellectual property. Building on the success of Phase I (2024), MOBI and its partners are now advancing to Phase II (2025), which will deepen the system’s capabilities for battery data exchange.

The post 2024 Report: MOBI Milestones first appeared on MOBI | The New Economy of Movement.


Energy Web

Green Proofs: a 360° View

Today we will take a step back from our discussions of Green Proofs-powered platforms to take a deeper dive into Green Proofs itself — why it exists, the problems it solves, and how it works Why Green Proofs? In today’s world, corporations are under immense pressure to reduce their carbon footprints, meet regulatory standards, and fulfill consumer demand for greener products. In doing
Today we will take a step back from our discussions of Green Proofs-powered platforms to take a deeper dive into Green Proofs itself — why it exists, the problems it solves, and how it works.

Why Green Proofs?

In today’s world, corporations are under immense pressure to reduce their carbon footprints, meet regulatory standards, and fulfill consumer demand for greener products. In doing so, they must balance the challenge of providing good and verifiable data that stands up to external scrutiny with protecting their proprietary information and processes.

Corporations are pursuing a variety of sustainability solutions to achieve these goals, which vary in terms of flexibility, cost and compliance with the standards of large-scale enterprises and regulators.

To address these complex sustainability needs, Energy Web developed Green Proofs — a powerfully comprehensive and configurable software solution — to bring deep levels of transparency and verifiability to emerging green products and markets.

Green Proofs technology is built to enable the following:

1. Buy and sell low carbon services and commodities: Whether your company deals in biofuels, green energy, or climate-conscious services, Green Proofs helps you sell and source products that can be verifiably marketed as sustainable.

2. Launch green product registries: Create transparent, scalable, and credible market access in “hard to abate” sectors that can benefit from attribute tracking and support broader decarbonization.

3. Prove your company and products are sustainable: Green Proofs provides tools for tracking and reporting progress toward environmental goals. It integrates granular data, helping you measure Scope 3 emissions and evaluate supplier impact while protecting sensitive information.

Green Proofs technology

Green Proofs is a suite of modular technology solutions that come in the form of turn-key applications or highly customized software built in close partnership with Energy Web.

A core technology underpinning Green Proofs software is the Energy Web Worker node network. Energy Web developed worker nodes to solve a longstanding problem that hindered the advancement of energy tracking solutions: solution logic varied widely according to use case and relied on commercially sensitive data that oftentimes needed to remain private, but the results needed to be transparent and publicly verifiable.

Worker nodes address this problem by allowing enterprises to configure their own computing networks that:

Ingest data from external sources
Execute custom logic workflows
Vote on results in order to establish consensus without revealing or modifying the underlying data
Publish the consensus to a trusted, public ledger

Worker nodes put enterprises in the driver’s seat of their application, giving them granular control over workflow logic and data inputs. The end result is an enterprise-friendly architecture that provides cryptographic proof that pre-defined rules and processes are being followed correctly, while preserving data privacy and integrity.
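A hypothetical sketch of that voting step: each node runs the same workflow on data it holds locally, publishes only a hash of its result, and the network accepts whichever result a majority of nodes voted for. The example logic and field names are invented for illustration; real Green Proofs deployments are considerably more involved.

```python
# Illustrative worker-node-style vote: private inputs stay local,
# only result digests are shared, and the majority digest is published.
import hashlib, json
from collections import Counter

def result_hash(result: dict) -> str:
    # Canonical serialization so identical results hash identically
    return hashlib.sha256(json.dumps(result, sort_keys=True).encode()).hexdigest()

def node_vote(private_input: dict) -> str:
    # Custom workflow logic runs locally; only the digest leaves the node
    result = {"certified_kwh": private_input["metered_kwh"] * private_input["green_share"]}
    return result_hash(result)

def tally(votes: list[str], quorum: float = 0.5) -> str | None:
    digest, count = Counter(votes).most_common(1)[0]
    return digest if count / len(votes) > quorum else None

nodes = [  # three nodes ingesting the same external data feed
    {"metered_kwh": 1000.0, "green_share": 0.4},
    {"metered_kwh": 1000.0, "green_share": 0.4},
    {"metered_kwh": 1000.0, "green_share": 0.4},
]
consensus = tally([node_vote(n) for n in nodes])
print("publish to ledger:", consensus)
```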

It is important to note that while Energy Web has historically been known for its use of blockchain, worker nodes do not exist to store or tokenize certificates and data on the blockchain. The primary use of blockchain in Green Proofs is to serve as a ledger for the worker nodes’ validation results of logic workflows.

Green Proofs Business Applications

Green Proofs helps companies demonstrate that their operations, products, and services are sustainable. It achieves this in three ways:

Helping Companies Buy and Sell Low-Carbon Services and Commodities

Green Proofs streamlines access to low-carbon services and commodities, empowering companies to source materials and services that both support their sustainability goals and enable the marketing/sale of verifiably green products. Currently, Green Proofs is streamlining market access to sustainable aviation fuel, green EV charging, low-carbon shipping services, and climate-aligned Bitcoin mining, making it easier for companies to find the right solutions to reduce their Scope 1, 2, and 3 emissions.

The recently-announced Katalist platform is an excellent example of this business application — using Katalist, maritime freight customers can lower their emissions using a robust book-and-claim system, supporting corporate sustainability claims about themselves and their products.

Launch Green Product Registries

For companies or consortia looking to establish or expand markets for emerging green commodities, Green Proofs offers the ability to launch next-generation Green Product Registries. These registries support transparent, scalable tracking of sustainable goods and services, enabling participants to securely book, trade, and retire digital certificates that represent specific environmental attributes.

By deploying these customized registries, companies can support sustainable product verification, drive growth in emerging green markets, and contribute to decarbonization efforts on a larger scale. With the flexibility and transparency offered by Green Proofs, businesses can lead the way in expanding sustainable options for their industry while building trust with customers and stakeholders.

To learn more about this business application, we invite you to explore the SAFc Registry, where users can obtain certificates representing the use of sustainable aviation fuel, which are then used to credibly claim Scope 3 emissions reductions.

Track and Report Emissions

To truly demonstrate sustainability, companies need more than metrics; they need verifiable data that tracks the environmental impact of their products, operations, and supply chains (scope 1, 2 and 3 emissions). Green Proofs can help companies better collect, track, and report this data in detail, offering insights on both corporate-level and product-specific emissions. We are currently working with industry partners to gather requirements for this business application.

Who can use Green Proofs?

Energy Web built Green Proofs for any corporation or organization that wants to market itself or its products as green. We help customers access Green Proofs in a variety of ways — from no-code and low-code solutions that enable quick, independent launches to providing ongoing design, development, and hosting services for bespoke platforms. To discuss possibilities for applying Green Proofs to your use case, please contact us!

Green Proofs: a 360° View was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 01. December 2024

Digital Identity NZ

New Zealand lawyer ‘not surprised’ if Australian laws change for retail biometrics use

Source: Biometric Update Website Digital Identity New Zealand (DINZ) hosted a discussion led by NEC New Zealand on “Facial Recognition and CCTV Integration in Retail Security” this week, with a panel including data privacy, government and legal experts. It comes amidst industry anticipation of the New Zealand Privacy Commissioner’s conclusions on supermarket chain Foodstuff’s results fro

Source: Biometric Update Website

Digital Identity New Zealand (DINZ) hosted a discussion led by NEC New Zealand on “Facial Recognition and CCTV Integration in Retail Security” this week, with a panel including data privacy, government and legal experts.

It comes amidst industry anticipation of the New Zealand Privacy Commissioner’s conclusions on supermarket chain Foodstuffs’ results from its trial of facial recognition. The company considers the preliminary findings of the trial encouraging.

In Australia, retail chain Bunnings made headlines after it was found to have breached the country’s privacy laws by using facial recognition. However, the company received some unexpected support when 78 percent of nearly 11,000 respondents supported the company’s use of the technology.

In the discussion, Campbell Featherston, a partner at law firm Dentons New Zealand, mentioned that it is easier to deploy facial recognition technology (FRT) in New Zealand rather than Australia due to differences in law.

Commenting on the Bunnings case, Featherston remarked that the Australian retailer must obtain consent when collecting biometric data such as from FRT.  “The absence of consent under Australian law means that it is very difficult for Bunnings to roll out facial recognition technology,” he said.

“The need for consent doesn’t apply [in New Zealand],” Featherston said, adding that it wouldn’t surprise him if the privacy law had to change in Australia, for example by removing the need for consent in order to accommodate the use of FRT.

Ross Hughson, managing director of Personal Information Management, was asked what was driving the use of FRT amongst retailers. “The key driver is health and safety of staff,” he replied, pointing to managers’ responsibility over their employees’ health and safety.

Featherston elaborated on how best to comply with New Zealand’s Privacy Act 2020, mentioning privacy impact assessments (PIA), having a good understanding of privacy safeguards, having human oversight of the technology, and properly trained staff. He suggested that users of the technology should be sure of the purpose of what they’re trying to achieve through its use, and to avoid “purpose creep.” Transparency is also important and retailers should have notices at the entrance of stores, and “customer-facing documentation,” he said.

The senior lawyer made the point that issues can arise even in the absence of technology. He brought up the example of security guards trying to identify people based on grainy CCTV footage, which could lead to misidentification.

Dr. Vica Papp, principal data scientist at MBIE, was invited to talk about racial biases. This is a chief concern of New Zealand Privacy Commissioner Michael Webster as the accuracy of FRT for minority populations with darker skin is an issue.

Papp acknowledged that biases exist and stressed the importance of training staff on unconscious bias. She said that with FRT the issue can be physics-based rather than one of “race,” since people with darker skin tend to reflect less light, and pointed to “light receptivity” and how it may affect FRT. She advised that retailers should choose a product that does not show discrimination, ensure it can handle the specific local population, and train and test the system on “well-curated data sets.”

The post New Zealand lawyer ‘not surprised’ if Australian laws change for retail biometrics use appeared first on Digital Identity New Zealand.

Thursday, 28. November 2024

The Engine Room

Contribute to our latest project: Social justice organizations based in Africa and Latin America impacted by disinformation campaigns 

We're looking to talk to organizations — especially those working on climate — who have been targeted by disinformation campaigns or impacted by disinformation campaigns. The post Contribute to our latest project: Social justice organizations based in Africa and Latin America impacted by disinformation campaigns  appeared first on The Engine Room.

We're looking to talk to organizations — especially those working on climate — who have been targeted by disinformation campaigns or impacted by disinformation campaigns.

The post Contribute to our latest project: Social justice organizations based in Africa and Latin America impacted by disinformation campaigns  appeared first on The Engine Room.

Tuesday, 26. November 2024

Digital ID for Canadians

Digitizing Traceability of Agriculture and Food – DIACC Special Interest Group Insights

In Fall 2023, the DIACC, in collaboration with Agriculture and Agri-Food Canada and the University of Guelph, launched a Special Interest Group (SIG) focused on…

In Fall 2023, the DIACC, in collaboration with Agriculture and Agri-Food Canada and the University of Guelph, launched a Special Interest Group (SIG) focused on enhancing traceability in the agri-food sector through digital tools. The Digitizing Traceability of Agriculture and Food SIG convened nearly sixty organizations to discuss how emerging technologies, including blockchain, artificial intelligence (AI), and verifiable credentials, can improve transparency and trust across supply chains. Through a series of virtual sessions, stakeholders shared insights on digitization’s role in advancing food traceability and establishing secure data-sharing frameworks.

Download the report here.

DIACC-DTAF-SIG-Report


DIDAS

DICE 2024: Shaping the Future of Digital Trust

The Digital Identity unConference Europe (DICE) 2024 took place this June in Zürich, welcoming global experts to discuss not just secure digital identities but the creation of an integrated ecosystem of trust. Over three sunny days at Trust Square, attendees worked to connect digital trust to real-world economic advantages by focusing on authentic data, verifiable ...

The Digital Identity unConference Europe (DICE) 2024 took place this June in Zürich, welcoming global experts to discuss not just secure digital identities but the creation of an integrated ecosystem of trust. Over three sunny days at Trust Square, attendees worked to connect digital trust to real-world economic advantages by focusing on authentic data, verifiable proofs, and practical applications that go beyond mere identification. This was made possible through the collaboration of Trust Square, DIDAS, and IIW, emphasizing the power of partnerships in advancing digital trust.

A New Direction for Swiss Digital Identity

Bundesrat Beat Jans, head of the Swiss Federal Office of Justice and the leading figure behind Switzerland’s e-ID project, opened the conference with updates on the nation’s progress. Switzerland is advancing its secure digital identity framework while maintaining technology-neutral principles, ensuring future adaptability.

The new e-ID law, under parliamentary review, defines the required trust infrastructure without mandating specific technologies, providing issuers with the flexibility to choose tailored solutions. This approach strengthens market responsiveness, protects investments, and ensures seamless integration with emerging innovations.

However, balancing security and accessibility remains a challenge. By limiting the initial rollout of the e-ID to a government-developed open-source wallet, Switzerland aims to provide robust security while ensuring transparency. Plans to engage with the OpenWallet Foundation and promote open hardware crypto processors reflect Switzerland’s commitment to digital sovereignty and international standardization.

Beyond Identity: Building a Trust Ecosystem

DICE 2024 highlighted that the economic value of digital trust lies in creating a broader ecosystem—not just digital identities. Verifiable credentials, authentic data, and secure proofs were discussed as enablers of trust in industries like healthcare, finance, and logistics.

By emphasizing data integrity, industries can streamline processes, reduce fraud, and build consumer trust. For example, verifiable credentials can authenticate professional qualifications or certifications in real-time, while trusted data channels improve supply chain transparency. DICE underscored the need for governance models to maintain consistency and reliability in such ecosystems, paving the way for more secure, efficient, and scalable solutions.

Economic Implications of Trust Ecosystems

The trust ecosystem proposed at DICE 2024 offers significant economic benefits:

Fraud reduction: Improved verification lowers financial losses.
Streamlined compliance: Simplified KYC and AML processes reduce administrative burdens.
Efficiency gains: Accelerated verification boosts operational efficiency and customer experiences.
Market expansion: Trust ecosystems create opportunities in sectors previously hindered by security concerns.
Innovation stimulation: Open standards encourage the development of interoperable solutions.

By linking trust-building efforts to tangible economic outcomes, DICE 2024 shifted the conversation from abstract concepts to measurable benefits.

The DICE unConference Experience

The open and collaborative format allowed attendees to co-create the agenda, ensuring discussions addressed real-world challenges. Key topics included:

Interoperability: Cross-border and cross-platform systems to streamline international transactions.
Zero-knowledge proofs: Privacy-preserving verification methods.
Authentic data sharing: Applications in industries like healthcare and finance.
User-centric solutions: Accessibility for all users, including those with older technology.
Governance frameworks: Clear standards for public-private collaboration.

These discussions highlighted the importance of aligning technological innovation with user needs and industry-specific applications.

Looking Ahead: DICE 2025

To build on the momentum of 2024, DICE has announced two major events for 2025:

1. DICE Ecosystems | March 4-5, 2025
Focus: Deploying verifiable credentials and authentic data across business ecosystems.

Collaborate with sector leaders.
Build partnerships for adoption strategies.
Develop cross-sector production use cases.

2. DICE 2025 | September 2-4, 2025
Focus: Advancing technologies and frameworks for digital identity and trust.

Deep dives into core technologies and governance.
Open Space format for agenda co-creation.
Exploration of privacy-preserving solutions and global standards.

Aligning Digital Trust with Economic Benefits

DICE 2024 clarified that digital identity and trust are not standalone constructs—they drive real economic advantages. By linking trust ecosystems to fraud reduction, compliance efficiency, and market expansion, the conference demonstrated how secure systems empower businesses and consumers alike.

The Path Forward

While the digital trust landscape evolves, collaboration and innovation remain vital. As DICE continues to shape the agenda, the focus will be on actionable strategies that enhance security, scalability, and inclusivity.

For more information and updates on upcoming events, visit the DICE official website. Let’s continue to collaborate and innovate to forge a digital future that benefits everyone.

DIDAS HSLU Danube Tech GmbH zkdid Budapest University of Technology and Economics
Switch Berner Fachhochschule TTIAG Loughborough University Fraunhofer-Institut für Angewandte Informationstechnik FIT Lissi GmbH armasuisse W+T TangleLabs UG ZKorum SAS Valtioneuvosto ja ministeriöt Polygon Labs Polygon ID Bundesamt für Sozialversicherungen Tipolis Blokverse Decentralized Identity Foundation Digital Catapult Idyllicvision Catenable AG Kallistech Stadt Karlsruhe – Amt für Informationstechnik und Digitalisierung ICO Consult Gataca Labs S.L. Blockchainbird PXL Vision AG NOUMENA DIGITAL AG Meeco Kosma Connect GmbH Hygiaso AG KeyState.capital Swiss Safe AG SICPA SA Roche Diagnostics International Ltd. ProSapien LLC Unifiedpost Dutch BlockchainCoalition Ergon Informatik Abdagon AG More than Bits GmbH HID Global SAS SwissSign AG INNOPAY Vereign AG Veridos GmbH iC Consult GmbH Schweiz youniqx Identity AG Hushmesh Inc. AKB Bundesdruckerei GmbH KPMG Skribble AG ETH Zürich Fondazione Bruno Kessler Fraunhofer IAO Center for Digital Trust, C4DT – EPFL Robert Bosch GmbH Stadt Köln Animo Solutions Digital Trust Ventures Cloud CompassComputing Inc. EPFL Global Legal Entity Identifier Foundation (GLEIF) Save My Identity / Blockchain for Human Rights Easy Dynamics Corp Tuconic GmbH w3-ff Venture Builder GmbH AYANWORKS Sphereon Civic Technologies identinet GmbH Provenant Hypermine Technologies Private Limited Federal Office of Justice, Switzerland TNO mykin.ai Swisscom (Schweiz) AG Bundesamt für Informatik und Telekommunikation Adnovum Eraneos Switzerland AG U.S. Department of Homeland Security Cardossier Biznet Trust Over IP Foundation SICPA esatus Schweiz AG digitalswitzerland cheqd Validated ID Procivis DFINITY Foundation

 

 

MAIN SPONSOR: Associate sponsors & partners: Fundamentals sponsors: Governmental endorsers:

 

Sponsors keep conference fees low by supporting the virtual platform and unConference set-up, providing meals, and more, making DICE available to all who want to attend, participate, and contribute.

If you are interested in becoming part of the growing community of Sponsors supporting DICE and the real time work that happens at this event, please contact the Trust Square team.

 

 

Friday, 22. November 2024

Energy Web

Celebrating One Year of the SAFc Registry: A Look Back and Forward

One year ago, during COP, we proudly unveiled the SAFc Registry. Soon after, we launched it into production alongside our visionary partners RMI, EDF, and SABA. It’s been an incredible journey, and as we mark this milestone, I wanted to reflect on where we started, how far we’ve come, and where we’re heading in the sustainable aviation fuel (SAF) book and claim industry The SAFc Registry rep
One year ago, during COP, we proudly unveiled the SAFc Registry. Soon after, we launched it into production alongside our visionary partners RMI, EDF, and SABA. It’s been an incredible journey, and as we mark this milestone, I wanted to reflect on where we started, how far we’ve come, and where we’re heading in the sustainable aviation fuel (SAF) book and claim industry.

The SAFc Registry represents more than just a technical solution that Energy Web developed — it’s a mission-driven effort to decarbonize aviation. As a non-profit-driven book and claim registry, it’s designed to ensure transparency, accountability, and scalability in the deployment of SAF. From the outset, our goal has been to create a trusted system for tracking and verifying SAF certificates, enabling the whole aviation supply chain to credibly participate in the energy transition.

A Groundbreaking Beginning

When we launched the SAFc Registry, it was a first in many ways for Energy Web. It was the inaugural implementation of a Green Proofs registry, setting the stage for the recent launch of Katalist, the Green Proofs registry we built for sustainable maritime shipping. The SAFc Registry also broke new ground with the innovative use of worker node technology for data validation — a distributed, multi-party approach that ensures reliability and trust.

Energy Web’s role initially focused on providing the technology behind the registry, but as the platform matured, so did our involvement. We’ve since stepped into the role of SAFc Registry Administrator, cementing our commitment to driving this critical initiative forward day-to-day.

Building Momentum

While the launch in Q4 2023 was exciting, the real momentum picked up in Q1 2024, and the past year has been a whirlwind of activity and growth:

Engagement and Education: We held countless introduction calls and demos, spreading awareness about the registry and its potential impact across the aviation and energy industries.
Global Adoption: Hard work paid off as we onboarded 50 companies from across the globe and a diverse range of industries.
Onboarding and Operations: We refined our onboarding pipeline and operational processes, ensuring a smooth experience for users.
Impact in Numbers: To date, the SAFc Registry has facilitated over 50 SAFc issuances, representing over 3,000 tonnes of SAF that has been produced and is at work displacing the use of conventional jet fuel.
Continuous Improvement: We’ve listened closely to user feedback, rolled out new features (including an API for registry power users), resolved bugs, and refined our terms and conditions to better meet user needs.
Governance: Convening our governing board for the first time was a key step in formalizing the policies and processes that will guide the registry’s growth into the future.

Lessons Learned

Like any pioneering effort, launching the SAFc Registry came with its share of challenges and learning opportunities. The v1 version of the registry taught us what worked — and what didn’t. These lessons have been invaluable as we’ve evolved the platform to better serve our users. The enhancements we’ve deployed reflect our commitment to listening, learning, and adapting.

Looking Ahead

Our work is far from over. Just this week, we’ve rolled out a new statistics section to the homepage providing users and the broader public with a clearer view of registry activity. It’s just one of many features in development as we work through a robust backlog of user-requested improvements and policy updates from our governing board.

The future of SAF is bright. Announced SAF projects are expected to increase the global supply by more than 10x by 2030. The SAFc Registry is poised to play a central role in supporting this growth, ensuring the scalability and credibility of SAF adoption on a global scale.

A Heartfelt Thank You

None of this would be possible without the incredible support of our partners, users, and team. As we celebrate this milestone, we’re filled with gratitude and optimism for what lies ahead. Here’s to the continued success of the SAFc Registry and the advancement of sustainable aviation.

Let’s keep flying higher — together.

About Energy Web
Energy Web is a global technology company driving the energy transition by developing and deploying open-source decentralized technologies. Our solutions leverage blockchain to create innovative market mechanisms and decentralized applications, empowering energy companies, grid operators, and customers to take control of their energy futures.

Celebrating One Year of the SAFc Registry: A Look Back and Forward was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Engine Room

[CLOSED] Join our team! We’re looking for our next Associate for Communications

The Engine Room is seeking an experienced, curious, and team-oriented communications person to assume the evolving role of leading communications at The Engine Room. The post [CLOSED] Join our team! We’re looking for our next Associate for Communications appeared first on The Engine Room.

The Engine Room is seeking an experienced, curious, and team-oriented communications person to assume the evolving role of leading communications at The Engine Room.

The post [CLOSED] Join our team! We’re looking for our next Associate for Communications appeared first on The Engine Room.

Thursday, 21. November 2024

FIDO Alliance

Finextra: Thought Leadership: The Future of Payment Authentication

In this PREDICT 2025 USA interview, Andrew Shikiar, Executive Director and CEO, FIDO Alliance, discusses how the industry has been exploring the death of the password for decades, how this […]

In this PREDICT 2025 USA interview, Andrew Shikiar, Executive Director and CEO, FIDO Alliance, discusses how the industry has been exploring the death of the password for decades, how this conversation has evolved and where we are with passkeys today – pinpointing why making progress with eliminating dependence on passwords is of paramount importance.

Watch the interview with Andrew Shikiar on “The Future of Payment Authentication.”


Project VRM

The Independent Customer

How do we get from this— To this— ? By making customers independent. Hmm… maybe The Independent Customer should be the title of my follow-up to The Intention Economy. Because, to have an Intention Economy, one needs independent customers: ones who are in charge of their own lives in the digital world: Who they are—to […]

How do we get from this—

To this—

?

By making customers independent.

Hmm… maybe The Independent Customer should be the title of my follow-up to The Intention Economy.

Because, to have an Intention Economy, one needs independent customers: ones who are in charge of their own lives in the digital world:

Who they are—to themselves, and to all the entities they know, including other people, and organizations of all kinds, including companies.
What they know about their lives (property, health, relationships, plans, histories)—and the lives of others with whom they have relationships.
Their plans—for everything: what they will do, what they will buy, where they will go, what tickets they hold, you name it.

Add whatever you want to that list. It can be anything. Eventually it will be everything that has a digital form.

What will hold all that information, and what will make that information safely engageable with other people and entities?

A wallet.

Not a digital version of the container for cash and cards we carry in our purses and pockets. Apple and Google think they own that space already, which is fine, because that space is confined by the mobile app model. Wallets will be bigger and deeper than that.

Wallets will embody two A’s: archives and abilities. Among those abilities is AI: your AI. Personal AI. One that is agentic for you, and not just for the sellers of the world.

Interesting harbinger: Inrupt now calls Solid pods “wallets.” (Discussion.)

Wallets are how we move e-commerce from a world of accounts to a world of independent customers with personal agency. With AI agents working for them and not just for sellers.

In his latest newsletter, titled ‘A-Commerce’ will be the biggest disruption since the web, and Digital Wallets are the new accounts, Jamie Smith says this:

The Web3 crowd say digital wallets are about transferrable digital assets and ownership without a central authority. And they are right.

But there’s more.

Many payments and identity experts will say that digital wallets are really about identity. Proving who you are and what you are entitled to do (tickets, access). Maybe even with fancy selective disclosure features.

They are also right. But that’s not the whole picture.

A pioneering group of others believe that digital wallets are really about the portability of any verifiable information, and digital authenticity.

And they too are right. We’re now getting much, much closer to what I’m talking about. But there’s still more.

Once individuals can show up independently, with their own digital tools – digital wallets with verifiable data, identity and digital assets – then we have something new, something special.

It’s a New. Customer. Channel.

Once a business asks for some data from a customer’s digital wallet, they have the opportunity to form a new digital connection with that customer.

A persistent one.

A verifiable one.

A private one.

An auditable, secure and intelligent one.

My goodness, what business wouldn’t want that? Imagine plugging that customer connection directly into business systems and processes, like CRM.

Yes, digital wallets can hold and manage assets. And identity. And portable, verifiable, authentic data.

But with the narrower ‘data and assets’ framing, we risk missing the larger market opportunity.

Digital wallets become the new account.

For everything.

OK so what is an account?

With money, it’s a shared and trusted record of all your transactions. Who did what, who paid what, and who owes who.

With business, it’s a shared record of all your products and interactions. It’s a critical customer channel and interface. The place people come to check things. To ask things. To ‘do business’.

Each customer account has a number. A unique identifier. It has a way to message customers. A way to record what’s been sent to, and received from, the customer.

Ring a bell?

Digital wallets will be able to do all this and much more.

They will also be more secure. More private. More flexible. And more portable.

So it’s possible – I’d even argue more likely – that digital wallets may be more disruptive than browsers were in the 1990s.

But like browsers, they will first be misunderstood.

Digital wallets will become the new account.

For business? For government? For banking? For health? For travel?

For life.

I have said for over a decade that the only 360° view of the customer is the customer.

Just imagine, once a customer can bring their own wallet – their own account – to each business:

The economics change. Why would a business maintain a complex and proprietary account platform when digital interactions can be handled – indeed automated – via a verifiable digital wallet that’s available on every smart device?
The data flows change. Why would a business store unnecessary customer data when they can just ask for it on demand, with consent, from the customer’s digital wallet? Then delete it again once used?
The risks change. What if we could reduce fraud and account takeover to near zero, when every customer interaction has to be authenticated via the customer’s digital wallet (likely with biometrics)?

The very fabric of the customer relationship changes.

This is just a glimpse of what‘s possible, and what’s coming. Especially when you tie it to digital AI agents….

When you look closely, you’ll see that digital wallets aren’t even The Thing. They are ‘below the surface’ of the customer channel.

Lots to be written about that. Coming soon.

For now, it’s a simple switch: when you hear ‘account’, just think ‘wallet’.

Here is the challenge: making wallets a must-have: an invention that mothers necessity.

We’ve had those before, with—

PCs
word processors and spreadsheets
the Net and the Web
graphical browsers
personal publishing and syndication
smartphones and apps
streams and podcasts.

Wallets need to be like all of those: must-haves that transform and not just disrupt.

It’s a tall order, but—given the vast possibilities—one that is bound to be filled.

As for why this won’t be something one of the bigs (e.g. Apple and Google) do for themselves, consider these five words you hear often online:

“Wherever you get your podcasts.”

Those five words were made possible by RSS.

It’s why all of the things in the bullet list above are NEA:

Nobody owns them
Everybody can use them
Anyone can improve them

When we have wallets with those required features, and they become inventions that mother necessity, we will have truly independent customers.

And we will finally prove ProjectVRM’s prime thesis: that free customers are more valuable than captive ones—to themselves and to the marketplace.


Digital Identity NZ

Council Elections, DISTF Milestone, and End-of-Year Highlights | November Newsletter

A warm welcome to our newest members: api connects, Deloitte, and SushLabs. The post Council Elections, DISTF Milestone, and End-of-Year Highlights | November Newsletter appeared first on Digital Identity New Zealand.

Kia ora,

While the US elections took the media limelight this month, I’d like to remind our members of another election of more local significance—Digital Identity NZ’s annual election to vote candidates onto our Executive Council—so we can continue the drive towards a robust and equitable digital identity ecosystem.

Voting closes next Monday 25th, with successful candidates announced at the Annual Meeting on December 5th.

Register for DINZ Annual Meeting

 Welcoming New Members

A warm welcome to our newest members: api connects, Deloitte, and SushLabs. It’s great to see our community growing, and we look forward to the fresh perspectives they bring.

Banking in the Spotlight – ANZ and BNZ Announcements

Major corporate members ANZ and BNZ were in the news this month with banking tech-related statements—ANZ announcing its partnership with Qippay, and BNZ announcing its acquisition of Blinkpay soon after the release of its new anti-scam app to reduce fraud.

Open Banking is gaining traction in Aotearoa, with 80% of consumer bank accounts now covered by open banking initiatives. This data was surfaced in the recently launched OpenFinanceANZ report and ecosystem map which was supported by our member PaymentsNZ alongside MasterCard, Fintech Australia and our NZTech Group partner FinTechNZ. Check out FinTechNZ’s report highlights here.

DISTF Milestone and DINZ’s Support

Hon. Judith Collins announced the finalisation of the Digital Identity Services Trust Framework (DISTF), with the milestone picked up internationally by Biometric Update. The announcement contained a link to this sub-site for the Trust Framework with new details regarding the accreditation process hitherto unseen by industry.

Despite our initial surprise, DINZ released its own statement in support as we prepare to re-engage with officials. DINZ’s DISTF Working Group is the forum for accreditation discussions and we’re here to support our members with DISTF education. We encourage organisations with views regarding DISTF accreditation to get in touch.

 Code of Practice for Inclusive and Ethical Digital Identity

DINZ is pleased to release an advanced draft of its code of practice for the inclusive and ethical use of digital identity. This code of practice provides a roadmap for ethical and inclusive digital identity practices in New Zealand.

It benefits DINZ members by providing a framework for responsible conduct, and the broader digital identity ecosystem by fostering trust, promoting inclusivity, and ensuring alignment with national and international standards. This code is a work in progress, shared to spark reflection and dialogue. Together, let’s shape a future where digital identity empowers and respects the rights and dignity of all people in New Zealand. 

In order to progress to a published code of practice, we need your feedback. We encourage our members to read the draft and provide feedback here >>

Read Draft Code of Practice

DIA Training Schedule Confirmed

Identification management plays a core role in our work, and members should have a foundational level of understanding about what it is and how it impacts your customers. Good identification management helps reduce and/or prevent fraud, loss of privacy and identity theft, by applying good practices and processes.

Topics covered in DIA Training Courses:

Identification Essentials (G1)
Names and other Information (G2)
Introduction to Identification Standards (G3)
Biometrics 101 (G4)

Here is the schedule of courses between November and February. 

Online learning at your own pace:

Identification Essentials (G1) Learn more >
Names and Other Information (G2) Learn more >

Half-day Zoom courses:

Thursday 5 December | 9am-12pm | G3 & G4
Wednesday 22 January | 9am-12pm | G1 & G2
Wednesday 26 February | 9am-12pm | G3 & G4

Interested in signing up for any of the Zoom sessions? Email identity@dia.govt.nz with the G or HD reference number.  A Zoom link will be supplied to those registered.

 Addressing Bias in Biometrics

Recent media coverage regarding claims of racial bias in facial biometrics has prompted the DINZ Biometrics Special Interest Group to look into the feasibility of a consented dataset of Kiwi faces, with DINZ using its independence to take on the role of custodian and facilitate subsequent software testing to address bias and accuracy.

Modest funding will be required so please get in touch if you would like to support this critical work. It’s essential for all public and private sector organisations deploying biometrics. On that note, we’re looking forward to NEC’s webinar next Tuesday. 

 Reflecting on 2024 and Looking Ahead

This is the last newsletter for 2024, published during International Fraud Awareness Week. Progress on digital identity is essential for reducing the impact of scams in Aotearoa and plays a crucial role in enabling broader participation in the digital economy.

The DISTF Act’s implementation, the Customer and Product Data Bill (with open banking and possibly electricity as sectors designated for regulation), revisions to the AML regime, Next Generation Payments, the emergence of digital identity acceptance networks, the digital farm wallet for the rural sector, and the digital drivers licence coming ever closer have all contributed to the growing vibrancy and diversity of an emergent digital trust ecosystem. It’s good progress but there’s more work to do.

We’ll resume our newsletters after the holidays, so we close the year out with the Coffee Chat in a fortnight. On behalf of the Executive Council and the DINZ team, we wish you a Meri Kirihimete me te tau hou.

Ngā mihi,
Colin Wallis
Executive Director, Digital Identity NZ

Read the full news here: Council Elections, DISTF Milestone, and End-of-Year Highlights

SUBSCRIBE FOR MORE


The post Council Elections, DISTF Milestone, and End-of-Year Highlights | November Newsletter appeared first on Digital Identity New Zealand.

Wednesday, 20. November 2024

FIDO Alliance

CISA: USDA Stops Credential Phishing with FIDO Authentication

As the saying goes, malicious actors don’t break in—they log in. There’s a significant truth in that statement. Today, many organizations struggle to protect their staff from credential phishing, a […]

As the saying goes, malicious actors don’t break in—they log in. There’s a significant truth in that statement. Today, many organizations struggle to protect their staff from credential phishing, a challenge that’s only grown as attackers increasingly execute “MFA bypass” attacks. 

In an MFA bypass attack, threat actors use social engineering techniques to trick victims into providing their username and password on a fake website. If victims are using “legacy MFA” (such as SMS, authenticator apps, or push notifications), the attackers simply request the MFA code or trigger the push notification. If they can convince someone to reveal two pieces of information (username and password), they can likely manipulate them into sharing three (username, password, and MFA code or action). 

Make no mistake—any form of MFA is better than no MFA. But recent attacks make it clear: legacy MFA is no match for modern threats. So, what can organizations do? Sometimes a case study can answer that question.

Today, CISA and the USDA are releasing a case study that details the USDA’s deployment of FIDO capabilities to approximately 40,000 staff. While most of their staff have been issued government-standard Personal Identity Verification (PIV) smartcards, this technology is not suitable for all employees, such as seasonal staff or those working in specialized lab environments where decontamination procedures could damage standard PIV cards. This case study outlines the challenges the USDA faced, how they built their identity system, and their recommendations to other enterprises. Our personal favorite recommendation: “Always be piloting”.

FIDO authentication addresses MFA-bypass attacks by using modern cryptographic techniques built into the operating systems, phones, and browsers we already use. Single sign-on (SSO) providers and popular websites also support FIDO authentication. 


Practical Ecommerce: Passkeys Gain Traction with Ecommerce Shoppers

Passkeys allow users to log in to their secure accounts without passwords. Ecommerce businesses were first in line when the FIDO Alliance introduced passkeys in 2022. The trade association, which stands for […]

Passkeys allow users to log in to their secure accounts without passwords. Ecommerce businesses were first in line when the FIDO Alliance introduced passkeys in 2022. The trade association, which stands for Fast ID Online, launched in 2012 with a mission to reduce the world’s password reliance.

Andrew Shikiar, executive director of FIDO, said the past two years have been momentous for members and ecommerce businesses. “You want to attract customers to your site and protect them from account takeover, credential stuffing, and phishing attacks,” he said. “That’s why PayPal, eBay, Amazon, Walmart, Best Buy, and other ecommerce companies were the earliest adopters of passkey payments.”

Shikiar noted that passkey awareness has risen from 39% in 2022 to 57% in 2024, according to a FIDO survey of 10,000 consumers in the U.S., U.K., France, Germany, Australia, Singapore, Japan, South Korea, India, and China.


FIDO Alliance | Free Yourself From Passwords with Passkeys

Watch the video to learn how to go passwordless with passkeys. What is a passkey? A passkey is a FIDO authentication credential based on FIDO standards, that allows a user […]

Watch the video to learn how to go passwordless with passkeys.

What is a passkey? A passkey is a FIDO authentication credential based on FIDO standards that allows a user to sign in to apps and websites with the same process they use to unlock their device (biometrics, PIN, or pattern). Passkeys are FIDO cryptographic credentials that are tied to a user’s account on a website or application. With passkeys, users no longer need to enter usernames and passwords or additional factors. Instead, a user approves a sign-in with the same process they use to unlock their device (for example, biometrics, PIN, pattern).
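To make that flow concrete, here is a minimal TypeScript sketch of a browser-side passkey sign-in using the standard WebAuthn API. The /auth/options and /auth/verify routes and the response encoding are hypothetical stand-ins for whatever a real relying party's server provides; in practice the server generates the challenge and verifies the signed assertion.

```typescript
// Minimal sketch of a browser-side passkey sign-in with the WebAuthn API.
// The /auth/options and /auth/verify endpoints are hypothetical server routes.

function toBase64(buf: ArrayBuffer): string {
  return btoa(String.fromCharCode(...Array.from(new Uint8Array(buf))));
}

async function signInWithPasskey(): Promise<boolean> {
  // 1. Fetch a one-time challenge from the (hypothetical) server.
  const options = await (await fetch("/auth/options")).json();

  // 2. Ask the authenticator for an assertion. The user approves with the
  //    same gesture used to unlock the device (biometric, PIN, or pattern).
  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(options.challenge), c => c.charCodeAt(0)),
      rpId: options.rpId, // e.g. "example.com"
      userVerification: "required",
    },
  })) as PublicKeyCredential | null;
  if (!credential) return false;

  // 3. Send the signed assertion back for server-side verification.
  const assertion = credential.response as AuthenticatorAssertionResponse;
  const verify = await fetch("/auth/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: credential.id,
      clientDataJSON: toBase64(assertion.clientDataJSON),
      authenticatorData: toBase64(assertion.authenticatorData),
      signature: toBase64(assertion.signature),
    }),
  });
  return verify.ok;
}
```

The key point is that no shared secret ever leaves the device: the user simply approves the sign-in with the same unlock gesture described above.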

Learn more about the benefits of using passkeys and how to get started with passkeys by visiting the FIDO website: https://fidoalliance.org/passkeys/

For more passkey-related resources, visit passkeycentral.org today: https://www.passkeycentral.org/home


Passkeys Explainer Video | FIDO Alliance

We all know passwords are frustrating to use, and not safe. Passkeys are the replacement for passwords. Strong cryptographic security behind passkeys prevents phishing attacks, reduces security breaches and account […]

We all know passwords are frustrating to use and not safe. Passkeys are the replacement for passwords. The strong cryptographic security behind passkeys prevents phishing attacks and reduces security breaches and account takeovers. Passkeys make sign-ins fast, simple, and secure. Passkeys also sync easily across all user devices, including new ones. Passkeys for businesses reduce IT time, avoid desktop hassles, and eliminate costly password resets.

Ready to switch to passkeys?

Visit Passkey Central today to get started: https://www.passkeycentral.org/home


Next Level Supply Chain Podcast with GS1

Spicing up Success: How Traceability Helped Hank Sauce Scale National Distribution

What started as a college project has now become a pantry staple sold in over 5,000 stores.  In this episode, Matt Pittaluga, Co-Founder of Hank Sauce, joins hosts Reid Jackson and Liz Sertl to share how a homemade hot sauce grew into a beloved national brand. Matt explains how traceability and consistency have been key to scaling the business while keeping product quality high. Through

What started as a college project has now become a pantry staple sold in over 5,000 stores. 

In this episode, Matt Pittaluga, Co-Founder of Hank Sauce, joins hosts Reid Jackson and Liz Sertl to share how a homemade hot sauce grew into a beloved national brand.

Matt explains how traceability and consistency have been key to scaling the business while keeping product quality high. Through detailed product codes and a robust production database, Hank Sauce tracks every ingredient from batch creation to store shelves, ensuring full transparency and control.

This meticulous approach to data and process has fueled Hank Sauce’s growth from a local favorite to a nationwide success.

 

In this episode, you’ll learn:

How Hank Sauce scaled its distribution to national retailers
The importance of traceability in ensuring food safety and product quality
Strategies for building networks to expand brand reach

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(01:34) The Hank Sauce story

(06:38) Grassroots marketing and early sales strategies

(10:09) Scaling up distribution to large retailers

(13:22) The importance of traceability and food safety

(16:11) Building a brand with a limited marketing budget

(19:21) Advice for new entrepreneurs

(26:30) Matt Pittaluga’s favorite tech

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Matt Pittaluga on LinkedIn

Check out Hank Sauce

Monday, 18. November 2024

FIDO Alliance

ARC Advisory Group: Wireless Broadband Alliance Integrates OpenRoaming with FIDO Device Onboard to Enable Zero-Touch Framework for IoT Device Onboarding

The Wireless Broadband Alliance (WBA), the global industry body dedicated to improving Wi-Fi standards and services, announced a new framework for WBA integrating OpenRoaming and FIDO Device Onboard (FDO). This […]

The Wireless Broadband Alliance (WBA), the global industry body dedicated to improving Wi-Fi standards and services, announced a new framework for WBA integrating OpenRoaming and FIDO Device Onboard (FDO). This initiative is intended to enable a seamless and secure zero-touch onboarding process for Internet of Things (IoT) Wi-Fi devices.


Fast Company: Say Goodbye to Passwords

It’s been a couple of years since Apple, Google, and Microsoft started trying to kill the password, and its demise seems more likely than ever. The FIDO Alliance, the industry […]

It’s been a couple of years since Apple, Google, and Microsoft started trying to kill the password, and its demise seems more likely than ever.

The FIDO Alliance, the industry group spearheading the passkey push, is putting out some much-needed guidelines to make passkeys usage feel more consistent from one site to the next, and the big tech platforms are getting better at letting you store passkeys in your preferred password manager. Work is also underway on a protocol to let people securely switch between password managers and take all their passkeys with them.

All this is contributing to an air of inevitability for passkeys, especially as major e-commerce players such as Amazon and Shopify get on board. Even if you’re not fully attuned to the passkey movement, you’ll soon have to go out of your way to avoid it.

“Within the next three to five years, virtually every major service will offer consumers a passwordless option,” says Andrew Shikiar, the FIDO Alliance’s CEO and executive director.


The Engine Room

Dec 3 – Join our online event: Alternative social media platforms for social justice organizations 

Join our report launch! The post Dec 3 – Join our online event: Alternative social media platforms for social justice organizations  appeared first on The Engine Room.

Sunday, 17. November 2024

Project VRM

ONDC, Beckn, and VRM

If we want VRM to prove out globally, we have to start locally. That’s what’s happening right now in India, using ONDC (the Open Network for Digital Commerce), which runs on the Beckn protocol. ONDC is a happening thing: One big (and essential) goal for VRM is individual customer scale across many vendors.  ONDC and […]

This is important. Be there.

If we want VRM to prove out globally, we have to start locally. That’s what’s happening right now in India, using ONDC (the Open Network for Digital Commerce), which runs on the Beckn protocol.

ONDC is a happening thing:

One big (and essential) goal for VRM is individual customer scale across many vendors.  ONDC and Beckn are for exactly that. Here is how kaustubh yerkade explains it in Understanding Beckn Protocol: Revolutionizing Open Networks in E-commerce:

Beckn protocol in the Real World
The Beckn Protocol is part of a larger movement toward creating open digital ecosystems, particularly in India. For example, the ONDC (Open Network for Digital Commerce) initiative in India is built using the Beckn protocol, aiming to democratize e-commerce and bring small retailers into the digital economy. The Indian government supports ONDC for making digital commerce more accessible and competitive.

Here are some practical examples of how the Beckn Protocol can be used in different industries:

1. Ride-Hailing and Mobility Services
Example: Imagine a city with multiple ride-hailing services (e.g., Uber, Ola, Rapido). Instead of using individual apps for each service, a user can use one app powered by the Beckn Protocol. This app aggregates all available ride-hailing services, showing nearby cars, prices, and estimated arrival times from multiple providers. The user can choose the best option, book the ride, and pay directly through the unified app.

Benefit: Service providers gain broader visibility, and users can easily compare services in one place without switching between apps.

https://becknprotocol.io/imagining-mobility-with-beckn/

2. Food Delivery Services
Example: A consumer uses a food delivery app that leverages Beckn to show restaurants from multiple food delivery services (like Zomato, Swiggy, and local food delivery providers). Instead of sticking to just one platform, the user sees menus from different services and can order based on price, availability, or delivery time.

Benefit: Restaurants get listed on more platforms, increasing their exposure, and users can find more options without hopping between different apps.

3. E-Commerce and Local Retail
Example: A shopper is looking for a product (like a phone charger) and uses an app built on the Beckn Protocol. The app aggregates inventory from big e-commerce players (like Amazon or Flipkart) as well as small local retailers. The user can compare prices and delivery times from both big platforms and nearby local stores, then make a purchase from the most convenient provider.

Benefit: Small businesses and local stores can compete with larger e-commerce platforms and reach a wider audience without needing their own app or website.

4. Healthcare Services
Example: A patient needs to book a doctor’s appointment but doesn’t want to manually search through different healthcare platforms. A healthcare app using Beckn shows available doctors and clinics across multiple platforms (like Practo, 1mg, or even independent clinics). The patient can choose a doctor based on location, specialization, and availability, all in one place.

Benefit: Patients get access to a larger pool of healthcare providers, and doctors can offer their services on multiple platforms through a single integration.

5. Logistics and Courier Services
Example: An online seller wants to ship products to customers but doesn’t want to manage multiple courier services. With an app built on Beckn, they can see delivery options from multiple logistics providers (like FedEx, Blue Dart, and local couriers) and choose the best one based on cost, speed, or reliability.

Benefit: Businesses can streamline shipping operations by comparing various logistics providers through one interface, optimizing for cost and delivery time.

6. Public Transportation
Example: A commuter is planning a trip using public transit in a city. Using a Beckn-powered app, they can view transportation options from multiple transit services (like metro, bus, bike-sharing services, or even ride-hailing). The app provides real-time schedules, available options, and payment methods across different transport networks.

Benefit: The commuter has a unified experience with multiple transportation modes, improving convenience and access to more options.

7. Local Services (Home Services, Repair, Cleaning)
Example: A user needs a home repair service (e.g., a plumber or electrician). Instead of browsing different service provider platforms (like UrbanClap or Housejoy), a Beckn-enabled app aggregates professionals from multiple service providers. The user can compare prices, reviews, and availability and book a service directly from the app.

Benefit: Service providers get access to more customers, and consumers can quickly find professionals based on location, reviews, and price.

8. Travel and Hospitality
Example: A traveler uses a travel booking app based on Beckn to find accommodations. The app aggregates listings from various hotel chains, Airbnb, and local guesthouses. The traveler can filter by price, location, and amenities, then book the best option without switching between platforms.

Benefit: Smaller accommodation providers can compete with big brands, and travelers get access to more choices across different platforms in one app.

9. Government Services and Civic Engagement
Example: A citizen uses a Beckn-enabled app to access multiple government services. They can apply for a driver’s license, pay taxes, and book a health checkup at a government hospital—all from one platform that integrates services from different government departments and third-party providers.

Benefit: Governments can offer a unified experience across various services, and citizens get easier access to public services without visiting multiple websites or offices.

He adds,

The ONDC (Open Network for Digital Commerce) initiative in India is built using the Beckn protocol, aiming to democratize e-commerce and bring small retailers into the digital economy. The Indian government supports ONDC for making digital commerce more accessible and competitive.

While it is nice to have government support, anyone anywhere can deploy open and decentralized tech, or integrate it into their apps and services.
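Stripped to its essence, the aggregation pattern in the examples above is: fan one intent out to many providers and merge the offers so the user can compare them in one place. The TypeScript sketch below is purely illustrative and is not the actual Beckn or ONDC API (whose search flows are asynchronous and network-mediated); the endpoints and the Offer shape are hypothetical.

```typescript
// Hypothetical sketch of the aggregation pattern: one client app queries
// several provider networks in parallel and merges the offers it gets back.

interface Offer {
  provider: string;
  item: string;
  price: number;
  etaMinutes: number;
}

const PROVIDER_ENDPOINTS = [
  "https://provider-a.example/search",
  "https://provider-b.example/search",
  "https://provider-c.example/search",
];

async function searchAcrossProviders(query: string): Promise<Offer[]> {
  // Fan the same intent out to every registered provider in parallel.
  const results = await Promise.allSettled(
    PROVIDER_ENDPOINTS.map(async (url) => {
      const res = await fetch(`${url}?q=${encodeURIComponent(query)}`);
      return (await res.json()) as Offer[];
    }),
  );

  // Merge whatever came back and sort so the user can compare on price.
  return results
    .flatMap((r) => (r.status === "fulfilled" ? r.value : []))
    .sort((a, b) => a.price - b.price);
}

// Usage: const offers = await searchAcrossProviders("phone charger");
```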

On Tuesday we’ll have a chance to talk about all this at our latest salon at Indiana University and live on Zoom. Our speaker, Shwetha Rao, will be here in person, which always makes for a good event—even for those zooming in.

So please be there. As a salon, it will be short on lecture and long on dialog, so bring your questions. The Zoom link is here.

 

 

Friday, 15. November 2024

FIDO Alliance

Daily Mail: Top 10 passwords used in the United States revealed – stop using them immediately if they’re yours

Experts discovered the top 10 overused passwords in the US that could put you at risk of being easily hacked. NordPass and NordStellar recently released their sixth annual analysis of personal […]

Experts discovered the top 10 overused passwords in the US that could put you at risk of being easily hacked.

NordPass and NordStellar recently released their sixth annual analysis of personal password habits.

Based on the data NordPass and NordStellar crunched, ‘secret’ was the most common password in the US.

The management platforms found that the password was used 328,831 times, and it would take less than one second for someone to crack it.

‘Secret’ is also ranked in the top 10 most common passwords in the world.

Speaking with CNBC, Andrew Shikiar, executive director of the FIDO Alliance, noted that hackers could still guess a password even if it is spelled using numbers or other substitutions.

‘For example, they might believe that “secret” is a weak password but “s3cr3t” will be hard to guess,’ Shikiar said in 2019. 

Thursday, 14. November 2024

FIDO Alliance

Branch enhances security and user experience with passkey implementation

Corporate Overview Branch® is a cloud-native home and auto insurance company founded in 2020. Operating on a serverless architecture, Branch’s mission is to simplify the insurance purchasing experience for consumers […]

Corporate Overview

Branch® is a cloud-native home and auto insurance company founded in 2020. Operating on a serverless architecture, Branch’s mission is to simplify the insurance purchasing experience for consumers and independent insurance agents.

“One of our key superpowers is making the insurance buying experience as easy as possible,” explained Arkadiy Goykhberg, Chief Information Security Officer at Branch.

Branch Authentication Challenges

Due to the sensitive nature of their market and the variety of stakeholders they served, Branch faced multiple authentication challenges:

Legacy two-factor authentication. Branch had been relying on SMS-based two-factor authentication, which has multiple issues: telco problems could prevent users from logging in, and SMS is not phishing resistant and is subject to the risks associated with SIM-swapping attacks.
Customer support volume. There was a high volume of support tickets related to password resets and login issues.
User-friendly approach. Branch needed a more secure and user-friendly authentication process to serve its 12,000+ independent insurance agents.
Compliance. Another core challenge was the need to meet strict compliance requirements in the highly regulated insurance industry.

How Passkeys Addressed Branch’s Challenges

Branch identified passkeys as the solution to their authentication problems for several reasons.

Enhanced Security: Passkeys are inherently phishing-resistant, addressing the vulnerabilities associated with SMS-based authentication.

Improved User Experience: Passkeys eliminate the need for passwords, reducing friction during login and preventing issues related to forgotten passwords or typing errors.

Reduced Support Burden: By implementing passkeys, Branch saw a significant reduction in support tickets. John MaGee, Software Product Manager at Branch, noted, “We did see our support ticket volume drop by about half, which was the key business goal, outside of some of the user experience and security goals of the project.”

Regulatory Compliance: Passkeys provided a strong foundation for meeting current and future regulatory requirements in the insurance industry.

Compatibility with Existing Infrastructure: Passkeys integrated well with Branch’s cloud-native architecture, allowing for a smoother implementation process.

Implementation Process and Results

Branch adopted a phased approach to implementing passkeys.

The first phase involved internal testing. Branch first implemented passkeys for internal use, which helped build confidence and user acceptance. Branch then went through a vendor selection and development phase, contracting with Descope. Branch decided that it was a more efficient approach to engage with a service provider to help with passkey implementation.

The project roadmap included a two month vendor selection process, followed by a three-month development phase and a six-week end-user migration phase.

The final step was a phased user migration. Branch rolled out passkeys to its agents in waves, starting with a small group and gradually scaling up. The onboarding process involved multiple communication campaigns to prepare users for the new authentication experience. The user journey included prompting users to set up passkeys and providing a fallback option of email and OTP. The goal was to ensure a seamless transition and reduce support ticket volume by eliminating password resets. This approach allowed the company to refine the process based on feedback and minimize risks.

The results of the passkey implementation were impressive:

25% passkey adoption rate across the organization, exceeding internal goals.
50% reduction in support ticket volume related to authentication issues.
Maintained steady login failure rates at 5%, despite the transition.
Improved user experience, with fewer frustrations related to authentication.

One surprising benefit was the high compatibility of passkeys with existing hardware and software. Goykhberg said that he had initially expected that only approximately 60% of systems would support passkeys.

“That hypothesis was wrong. To my surprise, only a few devices across thousands of logins could not support passkeys,” he said.

Branch’s passkey success and future roadmap

Branch’s successful implementation of passkeys has not only addressed their current authentication challenges but also laid the groundwork for future improvements and expansions.

Goykhberg said:
“Descope’s flexible workflow made implementing passkeys and taking care of edge cases relatively straightforward. With conditional steps, we routed users to passkeys when their hardware or software were compatible, and routed them to fallback MFA options when passkeys couldn’t be supported. Visualizing the user journey as a workflow helps us audit and modify the registration and authentication journey without making significant code changes, which sets us up well for the future.”
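As a rough illustration of that kind of conditional routing (a hypothetical sketch, not Branch’s or Descope’s actual code), the browser’s standard WebAuthn capability check can decide whether to offer a passkey or fall back to email plus a one-time code:

```typescript
// Hypothetical sketch of passkey-or-fallback routing. startPasskeyFlow and
// startEmailOtpFlow are placeholder names, not a real vendor API.

async function chooseLoginMethod(): Promise<"passkey" | "email-otp"> {
  // Standard WebAuthn capability check available in modern browsers.
  if (
    typeof PublicKeyCredential !== "undefined" &&
    (await PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable())
  ) {
    return "passkey";
  }
  return "email-otp"; // fallback for devices or browsers without passkey support
}

async function login(): Promise<void> {
  if ((await chooseLoginMethod()) === "passkey") {
    await startPasskeyFlow(); // placeholder: navigator.credentials.get(...)
  } else {
    await startEmailOtpFlow(); // placeholder: send and verify a one-time code
  }
}

async function startPasskeyFlow(): Promise<void> {
  /* prompt the platform authenticator */
}

async function startEmailOtpFlow(): Promise<void> {
  /* request and verify an emailed one-time code */
}
```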

The company’s successful phased rollout approach, starting with internal adoption and then gradually expanding to their agent base, highlights the importance of incremental implementation and learning. This strategy will continue to inform their future authentication initiatives. Building on the initial success of 25% passkey adoption, Branch aims to increase this number through targeted experimentation and user education.

Branch’s successful implementation of passkeys demonstrates how this modern authentication method can significantly improve both security and user experience in the insurance industry. By addressing the vulnerabilities of traditional authentication methods, reducing the support burden, and providing a seamless user experience, passkeys have proven to be a valuable solution for Branch’s authentication needs.

Read the Case Study

Origin Trail

Trace Labs, Core Developers of OriginTrail, Welcomes Toni Piëch and Chris Rynning to the Advisory…

Trace Labs, Core Developers of OriginTrail, Welcomes Toni Piëch and Chris Rynning to the Advisory Board Zürich, Switzerland — November 14, 2024 Trace Labs, the core builders behind the OriginTrail ecosystem, is pleased to announce the expansion of its advisory board with the addition of Toni Piëch and Chris Rynning. Both esteemed leaders bring extensive experience in fostering human-ce
Trace Labs, Core Developers of OriginTrail, Welcomes Toni Piëch and Chris Rynning to the Advisory Board

Zürich, Switzerland — November 14, 2024

Trace Labs, the core builders behind the OriginTrail ecosystem, is pleased to announce the expansion of its advisory board with the addition of Toni Piëch and Chris Rynning. Both esteemed leaders bring extensive experience in fostering human-centric technology, investment, and innovation, further positioning Trace Labs to drive trusted advancements in Artificial Intelligence (AI) and sustainable digital solutions across multiple sectors, including healthcare, construction, and mobility.

The OriginTrail ecosystem, built on decentralized knowledge graph technology, is committed to leveraging AI in a responsible and sustainable manner. By joining the advisory board, Toni and Chris will help shape Trace Labs’ vision for harnessing AI to positively impact industries while advocating for ethical, human-centered applications of technology.

Toni Piëch

Toni Piëch, a serial entrepreneur and 4th generation member of the Piëch-Porsche family, brings a unique blend of global experience and vision for developing a trusted technology ecosystem. Currently based in Luzern, Switzerland, Toni’s contributions to technology and sustainability are reflected both through the Anton Piëch Foundation (https://www.tonipiechfoundation.org/) and his broad technology investment activities, investing both in venture capital funds and directly in people and companies. A graduate of Princeton University with a background in East Asian Studies, Toni spent twelve years in China before returning to Europe to further his philanthropic and investment efforts that can make significant contributions to a better and safer world.

Toni’s LinkedIn

Chris Rynning

Chris Rynning, an economist and investment professional, brings decades of expertise in venture capital and global markets. A resident of Zurich, Switzerland, Chris is a seasoned investor with a background in mergers & acquisitions and public/private market investing, and is currently the managing partner of the Piëch-Porsche family office AMYP Ventures. A graduate of ESSEC in Paris, Chris also holds an MBA in Finance and Economics from the University of Chicago. His influence spans Asia, the US, and Europe, where he has lived and served as an investor and advisor to scale-up companies, while maintaining a thought leadership role in AI, cryptocurrencies, and blockchain. Chris also authored a book on the topic in 2018.

Chris’s LinkedIn

Toni and Chris join a prestigious advisory board that includes Dr. Bob Metcalfe, Ethernet inventor, and Turing Award winner; Greg Kidd, founder of Hard Yaka; and Ken Lyon, global logistics expert. Together, this board will support Trace Labs’ mission of pioneering decentralized solutions that power trust and transparency.

For further information, please contact:
lucija.naranda@tracelabs.io

Trace Labs, Core Developers of OriginTrail, Welcomes Toni Piëch and Chris Rynning to the Advisory… was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


FIDO Alliance

The Associated Press: One Tech Tip: Replacing passwords with passkeys for an easier login experience

You might have noticed that many online services are now offering the option of using passkeys, a digital authentication method touted as an easier and more secure way to log […]

You might have noticed that many online services are now offering the option of using passkeys, a digital authentication method touted as an easier and more secure way to log in. 

Some 20% of the world’s top 100 websites now accept passkeys, said Andrew Shikiar, CEO of the FIDO Alliance, an industry group that developed the core authentication technology behind passkeys.

Passkeys first came to the public’s attention when Apple added the technology to iOS in 2022. They got more traction after Google started using them in 2023. Now, many other companies including PayPal, Amazon, Microsoft and eBay work with passkeys. There’s a list on the FIDO Alliance website.

Still, some popular sites like Facebook and Netflix haven’t started using them yet.

Passkey technology is still in the “early adoption” phase but “it’s just a matter of time for more and more sites to start offering this,” Shikiar said.

Wednesday, 13. November 2024

Blockchain Commons

Musings of a Trust Architect: Building Trust in Gradients

Progressive Trust—it sounds a bit like something from a relationship advice column, right? But in the world of digital interactions, it’s actually a revolutionary model, one that moves us away from “all-or-nothing” choices into a more human, flexible way of establishing trust. Progressive Trust is about mirroring the natural ways we build trust in real life, adding depth and resilience to our digit

Progressive Trust—it sounds a bit like something from a relationship advice column, right? But in the world of digital interactions, it’s actually a revolutionary model, one that moves us away from “all-or-nothing” choices into a more human, flexible way of establishing trust. Progressive Trust is about mirroring the natural ways we build trust in real life, adding depth and resilience to our digital interactions.

“The basic idea behind progressive trust is to model how trust works in the real world”
—Christopher Allen, Musings of a Trust Architect: Progressive Trust (December 2022)

In real life, trust doesn’t happen at the click of a button. It’s a process. You don’t start a friendship, a business deal, or a marriage with complete openness or blind trust. Instead, what you reveal is initially minimized, and then trust builds up gradually. As we share experiences, we reveal more, bit by bit, learning through consistent responses from the other person. When it comes to digital relationships, whether they’re between people, devices, or other entities, why should things be any different?

Why Progressive Trust Matters Today

The internet didn’t start off so polarized. Back in the early days, you could slowly get to know people online, like on message boards or MUDs, where interaction was incremental and organic. But as commercialization took over, new online communities popped up with restricted, binary models of trust. Tech giants started telling us who to trust based on certificates or institutional endorsements, pushing people into a “trust or don’t trust” mindset. But this one-size-fits-all approach isn’t just impersonal. It’s risky. Without the gray area, we’re left with blind trust or total skepticism, with few options in between.

Enter Progressive Trust, which seeks to change that by returning choice to the user, letting individuals decide whom to trust and how much of themselves to reveal over time. It’s an effective way to enhance security and protect user agency, fitting seamlessly into decentralized systems like blockchain, where openness and security go hand in hand. Progressive Trust takes the online world back to a more natural process of gradual trust-building, transforming digital trust from a binary affair into something more organic.

The Progressive Trust Life Cycle

Let’s break down the Progressive Trust Life Cycle into its key phases, each step building on the last and adding layers of trust over time. Think of it as a journey from cautious introduction to informed engagement, with each phase providing the groundwork for a stronger, more resilient trust model. These are the steps of progressive trust that are simultaneously automatic in the real world and often ignored in the digital world.

0. Context – Interaction Considered

The foundation of Progressive Trust begins with understanding the Context of an interaction. This sets the stage by establishing the purpose and parameters of the interaction, helping each party assess risk and feasibility. Before any data is exchanged or any commitments are made, each party considers the interaction’s purpose, its goals, its potential benefits, and the risks involved. They also examine the setting in which the interaction takes place, ensuring that they understand the overall environment and any particular conditions that might impact their decision.

Example: A homeowner, Hank, evaluates hiring a contractor for a kitchen remodel. He considers the financial costs, the importance of quality work, and the potential risks of inviting someone into his home for an extended period. The stakes of the scenario are sufficient to prompt Hank to engage in a Progressive Trust model, as opposed to a quick, one-off transaction.

This initial phase helps each side assess whether the potential stakes, such as financial or reputational risk, warrant a full, Progressive Trust approach or if simpler, lower-risk models could suffice.

1. Introduction – Assertions Declared

With the interaction context defined, both parties proceed with Introduction, where they each make initial declarations and claims. By sharing basic information, the parties set the groundwork for further scrutiny, while keeping sensitive details private or hidden (for now).

Example: Hank meets Carla, the cabinet maker, at a social gathering and discusses his interest in remodeling his kitchen. Carla offers her business card and highlights her experience, expressing interest in working with him. This initial interaction is informal yet purposeful, establishing the first connection and introducing each party’s intentions.

This phase is an essential starting point for trust-building, as it allows each party to signal their intentions clearly and publicly, establishing a mutual understanding of what they aim to accomplish. It does not involve extensive trust verification but instead creates a framework of transparency and expectation between the participants.

2. Wholeness – Integrity Assessed

Once an introduction has been established, both parties assess the Wholeness of the information shared. This phase involves evaluating the structural integrity of the data, ensuring that all critical pieces are complete and correctly formatted. Think of this as a quality check: verifying that foundational information is present, well-formed, and free from any immediate signs of corruption or tampering.

Example: Hank checks Carla’s business card, noting that it includes her contractor license number and contact details. Carla, meanwhile, considers whether Hank’s job aligns with her skillset. Both use this phase to make sure the information they have about each other is coherent and free of red flags.

This phase creates the foundation for deeper verification by ensuring that each party’s data contributions are reliable at a surface level. Without verifying structural integrity, any future steps could rest on flawed or incomplete data, leading to potential misunderstandings or risks.

3. Proofs – Secrets Verified

With data integrity confirmed, the next step is Proofs, where parties delve into verifying the sources of the data. It’s a deeper level of validation, establishing the authenticity of the sources for each party’s assertions. That validation leverages modern technology such as digital signatures where possible, to minimize the risk of misrepresentation or fraud.

Example: Hank calls a few of Carla’s previous clients, confirming that they exist, and the testimonials given to him are real. Similarly, Carla may ask for proof of Hank’s readiness to pay by confirming his budget or financial standing.

This phase confirms that both parties’ assertions are backed by a proof, to establish a more secure foundation for the interaction.

4. References – Trust Affirmed

Building on the established proofs, the References phase affirms trust by gathering endorsements, certificates, or additional validation from external sources. This step goes beyond just authenticating the source of any assertions. It’s about gathering the good word from others, including testimonials, reviews, licenses, or certificates. Cryptographic methods may also be used to assure the validity of the references. Parties don’t necessarily gather every reference: they collect until they feel they have enough corroborating information to proceed.

Example: Hank checks Carla’s contractor license in a state registry and reads online reviews. Carla, in turn, verifies Hank’s reputation or credibility within her professional network, gaining confidence that his project is legitimate and that he can be trusted to honor financial commitments.

This phase provides a composite affirmation of the other party’s trustworthiness based on diverse sources, making trust more holistic. It creates a comprehensive picture without oversimplifying the credibility of each party into a binary “yes” or “no.”

5. Requirements – Community Compliance

After personal and third-party validation, the parties consider whether the interaction meets broader Community Standards and Requirements. Here, each party performs an audit to see if the interaction complies with external guidelines, legal standards, or industry norms, which may vary by context. Compliance might involve revealing additional data, following guidelines for quality or safety, or meeting regulatory requirements, which helps each party feel confident that their involvement is appropriate and sanctioned.

Example: Hank ensures that Carla’s contractor license and project quote meet legal requirements and industry standards, such as fair pricing and warranty expectations. Carla might consult her network or a local building authority to verify that Hank’s project is feasible and professionally compliant.

This phase adds another layer of credibility through its confirmation that the interaction aligns with expected practices and requirements.

6. Approval – Risk Calculated

With community compliance confirmed, each party calculates the risk of proceeding and provides a tentative Approval. This step involves a personal assessment, comparing the accumulated trust to any potential risks or liabilities. It’s a decision point where each party considers their own risk model and goals, determining whether the interaction is likely to fulfill their needs without exposing them to undue harm. Approval may involve internal checks or may require formal documentation of agreed-upon terms.

Example: Hank and Carla both review the project’s terms and risk factors, ensuring they feel comfortable with potential liabilities. When ready, they formalize their commitment by signing a contract, each confident that the project aligns with their risk model and is mutually beneficial.

This phase emphasizes that trust isn’t an all-or-nothing concept. It exists on a spectrum, and each party must decide if their level of trust is sufficient to continue.

7. Agreement – Threshold Endorsed (Optional)

In situations of higher stakes or complexity, the Agreement phase may require additional endorsements before proceeding. An Agreement phase is optional but valuable when external input can add layers of confidence, often through the endorsement of peers, family members, or other trusted figures. Threshold endorsements are vital in larger or more sensitive projects, ensuring that all necessary parties or authorities approve before moving forward.

Example: Hank might discuss the project with his family for added assurance, while Carla secures necessary permits from the city. Both parties use these endorsements to reinforce their confidence before moving forward.

This phase provides an extra level of validation, helping each party feel more secure in their decision to proceed.

8. Fulfillment – Interaction Finalized

Fulfillment is the phase where each party finally executes their commitments, bringing the project to life based on the trust established through previous steps. Fulfillment requires each party to act according to the rules they’ve set, adhering to any terms, standards, or expectations agreed upon earlier.

Example: Carla completes the kitchen remodel, delivering quality work as per the contract. Hank, in turn, fulfills his financial commitment by making the payment. The project reaches its conclusion, satisfying both parties’ expectations based on their prior trust-building efforts.

This phase represents the culmination of the trust-building process, where both sides honor their agreements and responsibilities. It’s a phase of action rather than evaluation, marking a key transition from planning to execution, after which the interaction is officially complete.

9. Escalation – Independently Inspected (Optional)

In high-stakes or sensitive interactions, the Escalation phase optionally introduces an independent, third-party inspection. This step allows an impartial reviewer to verify that each party’s work or commitments were met, ensuring that the final product aligns with the agreed-upon standards. An inspector may re-evaluate certain phases, especially compliance and fulfillment, confirming that all requirements were followed.

Example: A city inspector reviews Carla’s remodel to ensure it complies with local building codes, giving Hank and Carla final confirmation that the project meets regulatory standards.

This phase helps protect each party, providing an additional level of assurance when risk is high or when the interaction has lasting implications.

10. Dispute – Independently Arbitrated (Optional)

If issues arise, the final, optional phase of Dispute involves resolving conflicts through independent arbitration. In cases where fulfillment does not meet expectations, each party may bring forth additional data or reveal previously concealed information to support their case. An arbitrator then considers the evidence, reviewing both parties’ original commitments, agreements, and standards, to determine a fair resolution.

Example: If a cabinet installed by Carla collapses, Hank may initiate a dispute to assess liability. An independent arbitrator reviews the contract, Carla’s compliance with installation standards, and any relevant inspection reports, ultimately deciding if Carla is responsible for repairs or damages.

This phase safeguards both parties, providing a structured way to resolve disagreements that may impact future interactions or reputations.

The Progressive Trust Life Cycle

Interactions are actually mirrored by both parties, but this diagram simplifies things in most places by focusing on party two.
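
As a minimal sketch only (not part of Blockchain Commons’ specification; the phase names are taken from this post, everything else is illustrative), the life cycle above can be modeled as ordered data, with the optional phases flagged so that low-stakes interactions can skip them:

```python
# Illustrative sketch of the Progressive Trust Life Cycle as data.
# Phase names and summaries follow the post; the helper is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    number: int
    name: str
    summary: str
    optional: bool = False

LIFE_CYCLE = [
    Phase(0, "Context", "Interaction Considered"),
    Phase(1, "Introduction", "Assertions Declared"),
    Phase(2, "Wholeness", "Integrity Assessed"),
    Phase(3, "Proofs", "Secrets Verified"),
    Phase(4, "References", "Trust Affirmed"),
    Phase(5, "Requirements", "Community Compliance"),
    Phase(6, "Approval", "Risk Calculated"),
    Phase(7, "Agreement", "Threshold Endorsed", optional=True),
    Phase(8, "Fulfillment", "Interaction Finalized"),
    Phase(9, "Escalation", "Independently Inspected", optional=True),
    Phase(10, "Dispute", "Independently Arbitrated", optional=True),
]

def next_phase(current: int, high_stakes: bool) -> Phase | None:
    """Return the next phase, skipping optional phases for low-stakes interactions."""
    for phase in LIFE_CYCLE:
        if phase.number > current and (high_stakes or not phase.optional):
            return phase
    return None
```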

Beyond Binary Trust: How Progressive Trust Can Transform the Internet

Progressive Trust offers a way to return agency to individuals in a world increasingly dominated by centralized systems. Instead of clicking “OK” on trust agreements handed down by big corporations, users regain control over who they trust and to what degree, over time. Imagine a digital ecosystem where browsers, websites, or social media platforms gradually allowed users to choose what information they revealed and what they kept private, based on their own evolving trust models.

From Gradients to Greatness

The vision for Progressive Trust goes beyond making interactions safer; it’s about bringing digital trust closer to real-world norms. With Progressive Trust, we’re not just building secure systems—we’re creating environments where people can interact meaningfully and sustainably, with digital relationships that grow stronger over time, just like in real life. Whether it’s in journalism, finance, wellness, or personal data sharing, the possibilities are endless when trust is no longer binary.

Progressive Trust is hard, but it’s worth it. It’s a mature model, one that can elevate our digital interactions by letting trust grow naturally. We’ve evolved this process over thousands of years in the physical world; now it’s time to bring the same wisdom to the online world. By embracing Progressive Trust, we’re not just keeping data safe; we’re building a digital space where people can authentically connect and collaborate, one step at a time.

For a more extensive discussion of this Life Cycle, including a look at the vocabulary and several more examples in different domains, see “The Progressive Trust Life Cycle” on the Developer web pages. For more on progressive trust, see my 2004 introduction of the concept and my more recent 2022 musings on the topic.

Tuesday, 12. November 2024

FIDO Alliance

Biometric Update: Mastercard replacement of OTPs with passkeys and Click to Pay reaches APAC

Mastercard is enabling faster and more convenient online transactions with its newest feature, Mastercard Click to Pay, launching in the Asia-Pacific region. The result is that consumers will be able to […]

Mastercard is enabling faster and more convenient online transactions with its newest feature, Mastercard Click to Pay, launching in the Asia-Pacific region.

The result is that consumers will be able to enjoy one-click checkout across devices, browsers and operating systems, without needing to input one-time passwords (OTPs).

The feature is enabled by the Mastercard Payment Passkey Service, which allows on-device biometric authentication through facial scans or fingerprints, the same way phones are unlocked.


The Record: These major software firms took CISA’s secure-by-design pledge. Here’s how they’re implementing it

The Cybersecurity and Infrastructure Security Agency’s (CISA) secure-by-design pledge has hit its six-month mark, and companies that took the pledge say they’ve made significant security improvements since they signed onto […]

The Cybersecurity and Infrastructure Security Agency’s (CISA) secure-by-design pledge has hit its six-month mark, and companies that took the pledge say they’ve made significant security improvements since they signed onto the initiative.


Security Boulevard: FIDO: Consumers are Adopting Passkeys for Authentication

There appears to be growing momentum behind the use of passkeys as an alternative identity verification tool to passwords, with the familiarity with the technology growing over the past two […]

There appears to be growing momentum behind the use of passkeys as an alternative identity verification tool to passwords, with familiarity with the technology growing over the past two years while the use of passwords has declined a bit, according to the Fast IDentity Online (FIDO) Alliance.

In its latest Online Authentication Barometer, FIDO found that support for a number of authentication options – including not just passkeys but also biometrics – is growing.

Public awareness of passkeys has jumped from 39% in 2022, when the technology was first introduced, to 57% this year. Meanwhile, the use of passwords in various services sectors is dropping. For example, the percentage of people who used a password over a two-month period for financial services dropped from 51% two years ago to 31% this year.


Retail TouchPoints: The Login Effect: The Role of Customer Authentication Psychology in Retail Success

Retail lags in authentication modernization, but not because providers aren’t interested in upgrading. It’s because customers actively reject change. Familiarity, ease of implementation and legacy system compatibility all mean that […]

Retail lags in authentication modernization, but not because providers aren’t interested in upgrading. It’s because customers actively reject change. Familiarity, ease of implementation and legacy system compatibility all mean that very few retailers offer anything beyond usernames and passwords, not even two-factor (2FA) and multi-factor authentication (MFA).

Ecommerce sites have experimented with magic links, an authentication method that is a little higher friction but is still a viable passwordless alternative. Meanwhile, biometric authentication (think fingerprints and facial recognition) is gaining popularity among less technical users, even if it’s simply to unlock their smartphones. Passkeys, another passwordless authentication method, leverage biometrics or a PIN to let consumers confirm a purchase with just a tap or a quick selfie.


J:COM turns to Passwordless Authentication

Corporate Overview JCOM Co., Ltd. (J:COM) provides a wide range of services to 5.72 million households nationwide, including cable TV (specialty channels, BS, terrestrial digital), high-speed internet connection, smartphones, fixed-line […]

Corporate Overview

JCOM Co., Ltd. (J:COM) provides a wide range of services to 5.72 million households nationwide, including cable TV (specialty channels, BS, terrestrial digital), high-speed internet connection, smartphones, fixed-line phones, electricity, video entertainment, and home IoT.

Under the brand message “Making the new normal,” J:COM actively incorporates digital technology to offer new services that make customers’ lives more comfortable and enriched.

To ensure the safe and comfortable use of the various services provided by J:COM, customers need to register a J:COM Personal ID (phone number or email address), which is linked to multiple services and apps offered by the company. Since August 2019, J:COM has been considering a new J:COM Personal ID, aiming to follow the latest security measures while continuously and swiftly pursuing the convenience of easy ID registration and login, which are often contradictory goals.

Deployment of FIDO2

Previously, in addition to ID/password authentication, J:COM adopted multi-factor authentication by sending one-time passwords to phone numbers.

However, aiming for further convenience, J:COM decided to introduce passwordless authentication using biometric authentication available on customers’ everyday devices (smartphones, tablets).

For the implementation, J:COM used the FIDO-compliant authentication platform “Uni-ID Libra” provided by NRI Secure Technologies, Ltd. (NRI Secure).

Initially, there were challenges in guiding users through the initial setup of FIDO authentication due to differences in operation depending on the OS and browser specifications used by the users, such as fingerprint and facial recognition. However, these issues were resolved by improving screen displays and support site descriptions.

Effects of Implementation

As of August 29, 2024, the number of passkey (FIDO credentials) registrations has reached 16% of the total IDs, and the number of services that can use biometric authentication has reached 25. This implementation has not only improved convenience but also resulted in cost savings on SMS transmission fees, as the cost remained flat despite the increase in the number of users and authentications for the services provided by J:COM.

Passkey (FIDO credentials) registrations have reached 16% of the total IDs

Shiori Takagi from the Agile Development Department, IT Planning Promotion Division, Information Systems Department of JCOM Co., Ltd., commented on this case study:

“With the introduction of FIDO authentication, we believe we have made significant progress towards our goal of enabling customers to log in and use services more securely and easily. We believe that registration will expand further and service usage will be promoted in the future.”

Read the Case Study

Sunday, 10. November 2024

Ceramic Network

Meet our team at FIL Dev, Devcon, and DePIN Day!

We’ve been heads down working hard on solutions for builders at the intersection of data, AI and crypto. But we’re coming up for air to meet other builders in Bangkok this month. Whether you’re a builder, an operator, or just learning about the space,

We’ve been heads down working hard on solutions for builders at the intersection of data, AI and crypto. But we’re coming up for air to meet other builders in Bangkok this month. Whether you’re a builder, an operator, or just learning about the space, we’d love to meet you.

Where to find our crew:

Filecoin’s FIL Dev Summit – Nov 11
Devcon – Nov 12-15
Fluence’s DePIN Day – Nov 15 (RSVP to attend, and catch us at our booth)

Join Proof of Data to be part of an ongoing community

Join our private Telegram group, Proof of Data, for people working on challenges related to the Web3 data ecosystem. This is a collaborative, ongoing space where you can connect with others who are interested in decentralized storage, verifiable data, data availability, identity and reputation, synthetic data, DePIN, and more.

To get the invite link, chat with one of our team members at DePIN day, or DM us on X.

Friday, 08. November 2024

Origin Trail

OT-RFC-21 Collective Neuro-Symbolic AI

“Show me the incentives and I’ll show you the outcome.” Authors: OriginTrail Core Developers Date: November 8th 2024 Since the inception of AI in the 1960s, two main approaches have emerged: neural network-based AI and symbolic AI. Neural networks are statistical systems that generate outputs by detecting patterns in training data, while symbolic AI employs deterministic models with explic
“Show me the incentives and I’ll show you the outcome.”

Authors: OriginTrail Core Developers

Date: November 8th 2024

Since the inception of AI in the 1960s, two main approaches have emerged: neural network-based AI and symbolic AI. Neural networks are statistical systems that generate outputs by detecting patterns in training data, while symbolic AI employs deterministic models with explicit knowledge representations and logical connections. Today, transformers within the Large Language Model (LLM) group dominate neural networks, while knowledge graphs are the leading technology in symbolic AI for representing structured knowledge.

Used alone, each approach has limitations. Neural networks are probabilistic and can produce unwanted outputs (hallucinations), risk intellectual property issues, exhibit biases, and face model collapse with a growing amount of AI-generated (training) data online. Symbolic AI, meanwhile, is constrained by its rule-based reasoning, limiting creativity and user experience. Hybrid neuro-symbolic systems combine the strengths of both, leveraging neural networks’ usability and creativity while grounding them in knowledge graphs. This approach can enhance reliability, mitigate biases, ensure information provenance, and promote data ownership over IP risks.

OriginTrail Decentralized Knowledge Graph (DKG), together with NeuroWeb (the AI-tailored blockchain), is surfacing as one of the key components of the symbolic AI branch, enhancing knowledge graph capabilities with the trust of blockchain technology and powering Collective Neuro-Symbolic AI.

This RFC addresses the following key development milestones to further enhance the Collective Neuro-Symbolic AI and will serve as a basis for one of the most extensive roadmap updates to date:

DKG V8 Testnet results and learnings
DKG Core and Edge Nodes Economics
Collective Programmatic Treasury (CPT)
DKG V8 Mainnet launch in December

After reading the following OT-RFC-21, you may leave your comment here: https://github.com/OriginTrail/OT-RFC-repository/issues/47

“Show me the incentives, and I’ll show you the outcome.”

The quote by Charlie Munger speaks to the importance of setting the right incentives in any system. As the DKG network matures in scalability and adoption, the incentives can become more refined in their implementations and more aligned with supporting the key metric — growth of usage of the DKG network.

There are multiple roles in the OriginTrail ecosystem that are incentivized with both TRAC and NEURO. TRAC incentivizes Core node operators and TRAC delegators, while NEURO incentivizes NeuroWeb blockchain (collator) node operators, NEURO delegators, and knowledge publishers (henceforth best represented by DKG Edge node operators) on incentivized paranets.

The establishment of Collective Programmatic Treasury (detailed in a dedicated section below) will give the most active DKG paranets, by volume of new knowledge assets published to the DKG, an opportunity to take part in building the future of the technology.

The incentives updates and novelties will be released as a part of the DKG V8 mainnet release.

DKG V8 testnet results and learnings

In the first 5 weeks since the DKG V8 Testnet launch, the community has deployed over 500 V8 Core nodes, which, as part of the incentive program, submitted over 3.7 terabytes (13.7 billion lines) of core node operational logs and published over 8 million Knowledge Assets. These have proven very valuable inputs for the core developers, who have introduced several optimizations to the DKG based on the submitted telemetry, including performance boosts on the new paranet syncing features, testing of curated paranets, and other performance updates.

Chart of log lines submitted by V8 Core Nodes telemetry

The number of nodes on the V8 testnet highlights another key insight: even with a fixed reward budget of 100k TRAC, which was allocated to test the behavior of V8 Testnet Core Nodes, achieving an economically viable node count requires the full implementation of the DKG delegated staking feature. Delegated TRAC acts as a market mechanism to balance the node count according to the rewards available in the network at any time. This underscores the critical role TRAC delegators will play in maintaining stability and economic balance within the V8 DKG ecosystem.

As the initial phase of the V8 testnet wraps up, advancing V8 features and validating them requires an environment where all economic incentives are active to support the full deployment of the DKG V8. Key V8 components, such as the Edge Node and Core Node, will now continue to be deployed and optimized on the V6 mainnet, with the V8.0 mainnet launch set for December this year. This launch will initiate the Tuning Period, during which V8 will gain enhanced performance with features like Batch Minting, Random Sampling, and a new staking interface, all backed by real economic incentives.

In addition, synergistic effects between publishers (represented by DKG Edge Nodes, once the V8 network is deployed) and Core DKG Nodes will be fostered through horizontal scaling. This approach aims to refine network signaling, enabling an optimal network size by aligning the number of nodes more precisely with network demands.

The details in the following chapters of this RFC create a level playing field to prepare for updates on existing incentives on the DKG Core node and access to Collective Programmatic Treasury (CPT).

DKG Core and Edge Nodes Economics

The DKG V8 has been designed with major scalability improvements at multiple levels, with a prototyped implementation tested in collaboration with OriginTrail ecosystem partners from data-intensive sectors.

The major advancement that DKG V8 is making is in expanding the OriginTrail ecosystem’s product suite to two key products:

DKG Core Node V8 — highly scalable network nodes forming the network core, persisting the public replicated DKG
DKG Edge Node V8 — user-friendly node applications tailored to edge devices (phones, laptops, cloud, etc.)*

*The expansion to more devices is intended to be based on ecosystem builders’ capacity and market needs.

Internet scale with DKG Edge nodes

Edge nodes enable the DKG to reach every part of the internet we know today — any device, any user, any chain. As a lightweight version of the DKG node, Edge nodes support both accessing private and public knowledge on the DKG and publishing new knowledge.

Having this capability, the DKG Edge node is a very useful tool:

for paranet operators, to enable knowledge miners to publish new knowledge onto their paranets;
for solution builders, as a flexible interface for their neuro-symbolic AI products that can access both private and public parts of the DKG;
for DKG Edge node operators that want to start publishing to the DKG, so they can transform their DKG Edge node into a DKG Core node.

The continuation of V8 development focuses on teams looking to deploy their paranets & Edge nodes on DKG Mainnet to generate substantial usage. Therefore, the DKG Edge Node Inception Program budget of 750k TRAC is dedicated to builders launching paranets on both the V6 and V8 mainnet, with up to 100k TRAC per builder available as reimbursement for TRAC used for publishing to a particular paranet.

More details on how you can apply for the DKG Edge Node Inception Program can be found here.

Horizontal scaling with DKG Core nodes

The backbone of the DKG network in V8 is formed of DKG Core nodes, whose purpose is to ensure secure hosting of the public DKG and facilitate network communication in a decentralized fashion. DKG Core nodes are incentivized through competing for DKG publishing fees in TRAC tokens, which are distributed among the best performing nodes in the network.

The success of a Core node in capturing fees in DKG V6 is currently a function of 3 factors: (1) node uptime and availability, (2) total TRAC stake delegated to a node, and (3) network hash distance (enabling efficient knowledge content addressing).

Several lessons have been learned from running V6 in production, most notably about how to improve scalability and further fine-tune the incentive system for DKG growth by updating the relevant parameters in the tokenomics formula.

In particular, the community of node operators has flagged the hash distance factor as the most problematic one, since it introduces randomness and impacts the system in an unpredictable and asymmetric way (nodes with the same amount of stake and uptime could earn different rewards purely due to a different position on the hash ring).

On the other hand, builders’ feedback is that the friction of contributing to the DKG needs to be significantly lower, specifically in terms of the publishing price per Knowledge Asset (addressed with scalability) and the accessibility of publishing through available nodes. Builders have expressed the need for an approach similar to blockchain RPC services, which allow sending transactions to a blockchain without running a blockchain node.

Therefore V8 introduces an updated Core node incentive system with the following factors:

Node uptime & availability, in positive correlation, as nodes need to prove their commitment to hosting the DKG by submitting proofs to the blockchain (through the new V8 random sampling proof system),
TRAC stake security factor, in positive correlation — the more stake a node attracts, the higher the security guarantees and therefore the higher the chance of rewards (same as in V6),
Publishing factor, in positive correlation — the more new knowledge has been published via a specific Core node (measured in TRAC tokens), the higher the chance of rewards,
Node fee (formerly “ask”), in negative correlation — nodes with lower fees positively impact system scalability and therefore have a higher chance of rewards.

The illustrative incentive formula is therefore:
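
As a hedged illustration only (an assumed multiplicative form, not the exact formula from the RFC), writing $u_i$ for node $i$'s uptime, $s_i$ for its delegated TRAC stake, $p_i$ for the TRAC volume published through it, and $a_i$ for its node fee, one form consistent with the four correlations above would be:

$$\text{RewardChance}_i \;\propto\; f_{\text{uptime}}(u_i)\cdot f_{\text{stake}}(s_i)\cdot f_{\text{publish}}(p_i)\cdot \frac{1}{f_{\text{fee}}(a_i)}$$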

where the specific functions are to be validated on both the testnet (for technical functionality) and mainnet (for market functionality) during the V8 Tuning period.

This addition creates further alignment of Core nodes with the ecosystem growth as Core nodes that take up roles of driving adoption will become more successful. Importantly, it also creates an aligned horizontal scaling approach, since additional Core nodes in the DKG become required with growing adoption. This creates a positive self-reinforcing feedback loop: new adoption leads to new nodes, which leads to increased scale, which unlocks further adoption. We can imagine core nodes almost acting as a “solar panel” that allows publishers to capture TRAC fees from the network so they could use it for their publishing needs.

Network security via staking

TRAC delegators are using their TRAC to secure the DKG network by delegating it to selected Core nodes. In exchange for a delegation (and increasing the core node’s chance of capturing rewards), the node operator splits a part of the captured rewards with the delegators. When selecting the core node to support, the delegators take all the key elements of a successful Core node into account which will, from DKG V8 onwards, include the amount of knowledge added to the DKG.

NEW: Those who use it, will build it: 60MM TRAC Collective Programmatic Treasury (CPT)

To ensure that those who use the network have incentives to build it in the future, the future development fund will be deployed as a 60MM TRAC Collective Programmatic Treasury (CPT). The Collective Programmatic Treasury will be implemented with a programmatic release schedule emitting TRAC to eligible builders. The release schedule will follow the most famous example of emissions in the cryptocurrency space, that of the Bitcoin halving, with minor alterations. The TRAC released from the Collective Programmatic Treasury will be dedicated to those who (both conditions should be fulfilled):

use TRAC tokens for publishing knowledge (paranets spending the most TRAC for publishing knowledge), AND
have been confirmed eligible for incentives by the community (paranets who have completed successful IPOs and are deployed on NeuroWeb).

The schedule

As mentioned above, the schedule draws inspiration from likely the most influential emission schedule in crypto, the Bitcoin halvings. The halving principle dictates that half of the outstanding amount is distributed in each following period, in equal amounts throughout that period. While BTC halvings are set at 4 years, our proposal is to set this period at 2 years in the case of TRAC. With that, the emissions schedule would be as follows:
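
As a rough, hedged illustration (assuming the halving rule is applied directly to the full 60MM TRAC in two-year periods, which may not match the final parameters): years 1-2 would release 30MM TRAC, years 3-4 would release 15MM TRAC, years 5-6 would release 7.5MM TRAC, and so on, with each period emitting half of the TRAC still outstanding.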

The Collective Programmatic Treasury will be deployed on the NeuroWebAI blockchain and will allow paranet operators to trigger Collect reward transactions which will calculate the amount of rewards they are eligible for and pay it out accordingly.

The distribution

The distribution amounts will be tied to the core principle of “Those who use it, will build it”. The metric that will therefore define the amount of TRAC a builder (represented by their paranet) receives is their TRAC spending for creating knowledge on the DKG. A simple example would be as follows:

Paranet A spent 1,000 TRAC
Paranet B spent 2,000 TRAC
Paranet C spent 3,000 TRAC

Collective Programmatic Treasury amount for the period: 600 TRAC

Paranet A: 100 TRAC reward
Paranet B: 200 TRAC reward
Paranet C: 300 TRAC reward

*All numbers are placeholders, just exemplifying the relationship between the spent and received amounts.
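
For clarity, here is a hedged sketch of this pro-rata rule (illustrative only, not the CPT implementation; the function name is hypothetical and the numbers match the placeholder example above):

```python
# Illustrative pro-rata split of a period's treasury emission by publishing spend.
def distribute_cpt(period_emission: float, trac_spent: dict[str, float]) -> dict[str, float]:
    """Split the period's TRAC emission proportionally to each paranet's TRAC spent on publishing."""
    total_spent = sum(trac_spent.values())
    return {paranet: period_emission * spent / total_spent
            for paranet, spent in trac_spent.items()}

rewards = distribute_cpt(600, {"Paranet A": 1_000, "Paranet B": 2_000, "Paranet C": 3_000})
print(rewards)  # {'Paranet A': 100.0, 'Paranet B': 200.0, 'Paranet C': 300.0}
```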

The Collective Programmatic Treasury will observe DKG network usage on the innovation hub of the OriginTrail ecosystem, the NeuroWebAI blockchain, and will thus apply only to NeuroWebAI-hosted paranets.

The eligibility & humans in the loop

Not every paranet on NeuroWebAI is by default eligible for the TRAC dev fund emissions. In order to achieve that status, a paranet must have been voted in via the IPO process, gaining support from the NeuroWebAI community through a NEURO on-chain governance vote. In this way, the community collectively decides on the dev fund & NEURO incentive emissions, transparently implementing the “humans in the loop” system via on-chain governance.

The Collective Programmatic Treasury (CPT) is expected to be implemented in March 2025.

DKG V8 release timeline

November

DKG V8 testnet layer 1 completed
OT-RFC-21 release
DKG V8 Edge node Inception program start

December

DKG V8.0 Mainnet and Tuning period launch
Neuroweb collator staking

January 2025

DKG V8.1, Tuning period ends

February 2025

Neuroweb TRAC Bridge made available

March 2025

DKG V8.2 release — Collective Programmatic Treasury

OT-RFC-21 Collective Neuro-Symbolic AI was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


Digital Identity NZ

DINZ Welcomes the Publication of the Digital Identity Services Trust Framework Rules

Digital Identity New Zealand (DINZ) is pleased to see the Government’s latest steps toward establishing safe and secure digital identity services, with the official publication of the Digital Identity Services Trust Framework (DISTF) Rules, effective today. This marks a significant milestone for digital identity in New Zealand, where DINZ has contributed its experienced advice from … Continue read

Digital Identity New Zealand (DINZ) is pleased to see the Government’s latest steps toward establishing safe and secure digital identity services, with the official publication of the Digital Identity Services Trust Framework (DISTF) Rules, effective today. This marks a significant milestone for digital identity in New Zealand, where DINZ has contributed its experienced advice from the outset.

The Complexity of Digital Identity
Building a robust yet adoptable digital identity framework is complex and demands ongoing collaboration and input. DINZ has been a part of this journey from the beginning, providing our expertise to help shape a framework that can work for New Zealand’s diverse communities and business needs.

Support for Our Members
As we pivot our DISTF Working Group’s focus from policy submissions to supporting members in adoption, DINZ is committed to helping organisations navigate their accreditation journey. For those engaging with the Trust Framework Authority, DINZ offers a range of resources to support their adoption and accreditation, including DISTF education and awareness sessions developed through our sustained and constructive engagement with the Department of Internal Affairs (DIA). Additionally, our partner InformDI (sponsored by DINZ member NEC) has made its online educational resource available at no cost to DINZ members through March.

Open for Business – Together
The publication of the Trust Framework Rules enables the Trust Framework Authority to be “open for business,” and DINZ stands ready to support members as they work towards providing secure and trusted digital identity solutions. Together, we are building a safe, secure digital identity ecosystem that supports both privacy and innovation in New Zealand.

Learn More
To find out more about our mahi, our DISTF Working Group, sample our DISTF awareness sessions, or member access to our educational resources, visit our website.

The post DINZ Welcomes the Publication of the Digital Identity Services Trust Framework Rules appeared first on Digital Identity New Zealand.

Thursday, 07. November 2024

FIDO Alliance

TechRadar: Youth of today say passwords are old news; passkeys are the future

Younger generations see passwords as outdated and are opting for passkeys, a FIDO-backed technology offering more secure, passwordless authentication. With increasing support from popular apps and services, young users are […]

Younger generations see passwords as outdated and are opting for passkeys, a FIDO-backed technology offering more secure, passwordless authentication. With increasing support from popular apps and services, young users are helping to drive this transition towards safer, FIDO-endorsed security solutions.

“Consumer expectations are changing, and this data should serve as a clear call to action for brands and organizations still relying on outdated password systems,” noted Andrew Shikiar, CEO at FIDO Alliance.

“Consumers are actively seeking out and prefer passwordless alternatives when available, and brands that fail to adapt are losing patience, money, and loyalty – especially among younger generations.”

“When consumers know about passkeys, they use them. Excitingly, 20% of the world’s top 100 websites and services already support passkeys. As the industry accelerates its efforts toward education and making deployment as simple as possible, we urge more brands to work with us to make passkeys available for consumers. The pace of passkey deployment and usage is set to accelerate even more in the next twelve months, and we are eager to help brands and consumers alike make the shift,” Shikiar concluded.


ZDNET: Passkeys are more popular than ever. This research explains why

The FIDO Alliance’s fourth annual Online Authentication Barometer reveals significant growth in awareness and adoption of passkeys, with 57% of surveyed consumers now familiar with the technology (up from 39% […]

The FIDO Alliance’s fourth annual Online Authentication Barometer reveals significant growth in awareness and adoption of passkeys, with 57% of surveyed consumers now familiar with the technology (up from 39% in 2022). As awareness increases, FIDO is urging more brands to adopt passkey support to help combat the rising sophistication of online threats and scams.


Velocity Network

Roundtable on Verifiable Credentials: Trust and Truth in an AI-enabled Talent Acquisition Mark

This week, Etan Bernstein and three Board Members of the Velocity Network Foundation, Sid Bhattacharya of SAP, Glen Cathey of Randstad and Jean-Marc Laouchez of Korn Ferry, recorded a virtual roundtable on how Verifiable Credentials can mitigate and even overcome the most serious challenges posed by AI in this space. The post Roundtable on Verifiable Credentials: Trust and Truth in an AI-enable

Wednesday, 06. November 2024

Next Level Supply Chain Podcast with GS1

Harnessing AI for Smarter, Faster Supply Chains with Steve Hochman

With disruptions becoming more frequent, companies must adapt or risk falling behind. To stay ahead, many are embracing new technology, with AI emerging as a powerful tool for enhancing supply chain agility and resilience. In this episode, Steve Hochman, VP of Research at Zero100, joins hosts Reid Jackson and Liz Sertl to talk about the key trends shaping the future of supply chains. He h

With disruptions becoming more frequent, companies must adapt or risk falling behind.

To stay ahead, many are embracing new technology, with AI emerging as a powerful tool for enhancing supply chain agility and resilience.

In this episode, Steve Hochman, VP of Research at Zero100, joins hosts Reid Jackson and Liz Sertl to talk about the key trends shaping the future of supply chains. He highlights the need for organizations to adapt by improving cross-functional collaboration and leveraging artificial intelligence.

In today’s rapidly changing global environment, organizations must focus on their people, processes, and technology to build lasting supply chain resilience.

 

In this episode, you’ll learn:

Effective ways to leverage AI for automating supply chain operations
The importance of cross-collaboration for a more integrated and responsive system
How to implement small-scale AI experiments for meaningful impact

 

Jump into the conversation:

(00:00) Introducing Next Level Supply Chain

(03:10) The rise of supply chain volatility

(08:12) Cross-functional collaboration in supply chains

(15:35) Innovation through AI experiments

(17:48) Case study: Shein’s use of AI for e-commerce

(21:07) The importance of data management

(25:40) Considering the ethical implications of AI

(31:16) Future trends of AI in supply chains

(32:39) Steve Hochman’s favorite tech

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with the guest:

Steve Hochman on LinkedIn

Check out Zero100

Friday, 01. November 2024

Energy Web

Energy Web Insights: AutoGreenCharge, explained

Since launching AutoGreenCharge, an app aimed at reducing carbon emissions from electric vehicles (EVs), many early users and corporate partners have asked us to explain how it works — especially how it uses Energy Attribute Certificates (EACs). Today, we’ll break down what AutoGreenCharge does and how it helps EV drivers, fleet owners, and anyone interested in decarbonizing EV charging and the gr

Since launching AutoGreenCharge, an app aimed at reducing carbon emissions from electric vehicles (EVs), many early users and corporate partners have asked us to explain how it works — especially how it uses Energy Attribute Certificates (EACs). Today, we’ll break down what AutoGreenCharge does and how it helps EV drivers, fleet owners, and anyone interested in decarbonizing EV charging and the grid as a whole.

Understanding Energy Attribute Certificates (EACs)

First, let’s talk about electricity markets and EACs. These certificates are used to track and trade the environmental benefits of renewable energy. EACs represent the “green” qualities of renewable electricity, such as how it was produced (wind, solar, etc.), where it was generated, and when it was created. For every 1 megawatt-hour (MWh) of renewable electricity, one EAC is issued. These certificates can be sold separately from the actual electricity, so people or businesses can support renewable energy without being directly connected to a renewable power plant.

When someone wants to claim they’ve used renewable electricity, they “retire” the certificate in a registry, ensuring that only one person or business can take credit for that specific green energy. Governments and companies worldwide have been using EACs for years to help make their energy use cleaner.

Different regions have their own versions of EACs:

Europe uses Guarantees of Origin (GOs), which prove electricity came from renewable sources.
North America uses Renewable Energy Certificates (RECs), similar to GOs, but often certified by a trusted third party.
Other countries, like Australia and Japan, have their own systems that work in a similar way.
International Renewable Energy Certificates (I-RECs) are used in many emerging economies and track renewable energy across borders.

EACs can be bought and sold on various platforms, and once purchased, they can be retired to claim a decrease in emissions.

How AutoGreenCharge Works

AutoGreenCharge simplifies the process of matching EACs to EV charging sessions. It takes into account where and when you charge your car, how much electricity you use, and then automatically selects and purchases EACs to offset any non-renewable energy in that session. Since most EV charging sessions use less electricity than a full MWh (the standard size of an EAC), AutoGreenCharge splits the certificates into smaller pieces to match your charge. This process is verified by Energy Web’s Worker Node network, which ensures everything is tracked accurately.
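
As a hedged sketch of the matching idea described above (illustrative only, not the actual AutoGreenCharge implementation; the function name and the zero-renewable default are assumptions), the fraction of a 1 MWh EAC needed to cover a charging session could be computed like this:

```python
# Illustrative matching of a charging session to a fraction of a 1 MWh EAC.
MWH_PER_EAC = 1.0

def eac_fraction_to_retire(session_kwh: float, renewable_share: float = 0.0) -> float:
    """Fraction of one EAC needed to cover the session's non-renewable energy.

    renewable_share defaults to 0.0: with no information about the physical
    grid connection, none of the electricity is assumed to be renewable.
    """
    non_renewable_mwh = (session_kwh / 1000.0) * (1.0 - renewable_share)
    return non_renewable_mwh / MWH_PER_EAC

# e.g. a 45 kWh charge with an unknown grid mix needs 0.045 of an EAC retired
print(eac_fraction_to_retire(45.0))  # 0.045
```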

AutoGreenCharge also has a new feature for enterprise: “Bring Your Own EAC.” This allows companies to use EACs they’ve already purchased. If they don’t want to handle it themselves, the app can take care of buying and matching the right certificates for them. For example, the first certificates purchased by Energy Web for the app were high-quality wind RECs from the U.S.

Addressing Common Questions

One AutoGreenCharge tester asked the following: “How do you know I am not charging my vehicle using a diesel generator?”

First, we should note that while it is possible to run a diesel generator to charge a vehicle, it’s a fairly rare occurrence as it doesn’t make economic sense. Charging an EV with a diesel generator is expensive, and buying EACs to greenwash it would be even more costly. However, we can consider this scenario as an example to demonstrate how AutoGreenCharge works. If the app doesn’t have any further information about the physical grid connection of the charging station in use, it will assume that none of the electricity used was renewable. So in this scenario, an EAC would be matched to the full charging session volume to make sure it’s covered by 100% renewables.

EACs give each individual and organization a way to contribute to the success of renewable facilities by attracting more investment into a greener grid, which will replace emitting electricity generators over time. AutoGreenCharge makes every charge event supportive of renewable energy — at least somewhere in your EAC region.

Limitations of EACs Today

While EACs are great for individual action supporting renewables, they have some limitations. For example, they usually don’t specify the exact time of electricity generation — just the year. They also don’t always account for physical grid limitations, meaning you could buy an EAC from another region even if it doesn’t match the electricity connection where you’re located.

Still, EACs are currently the most widely used system for tracking renewable energy. Efforts are already underway to make EACs more detailed. AutoGreenCharge supports several types of emerging protocols, like Trusted Green Charging and DIVE, which provide a more accurate environmental impact for specific charging events.

We’re excited about the future of AutoGreenCharge and welcome feedback! If you’re interested in learning more or want to share ideas, install the app for Android and iOS now or reach out to us.

Energy Web Insights: AutoGreenCharge, explained was originally published in Energy Web on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 30. October 2024

FIDO Alliance

HiTRUST Brings Passkeys to Colatour Travel

Imagine booking your dream vacation with just a single touch or a smile, without worrying about forgotten passwords or hackers. This seamless experience is now possible thanks to HiTRUST’s latest […]
Imagine booking your dream vacation with just a single touch or a smile, without worrying about forgotten passwords or hackers.

This seamless experience is now possible thanks to HiTRUST’s latest collaboration with Taiwan’s leading travel platform, Colatour. Building on nearly a decade of trusted partnership, HiTRUST and Colatour have launched an innovative passwordless solution. Powered by global FIDO standards, it redefines the security of digital travel booking platforms.

Passkey Authentication On Colatour

In today’s fast-paced digital world, where real-time interactions and personalized travel experiences are a must, it’s essential for businesses to provide secure and user-friendly customer journeys. As cyber threats escalate, targeting personal and financial data, HiTRUST is leveraging the FIDO Alliance’s global standard for passwordless authentication, backed by industry giants like Apple, Google, and Microsoft.

Colatour users can now bid goodbye to passwords. HiTRUST’s FIDO-based solution replaces them with a more secure alternative: biometrics. Whether it’s a fingerprint or facial recognition, users can authenticate instantly to Colatour’s online platform without passwords. On the web version, this method is compatible with all major browsers, making it easy for users to access.

Supported by the FIDO Alliance and technology leaders like Apple, Google, and Microsoft, passkeys transform online credential management by synchronizing credentials across devices within the same ecosystem, removing the need to re-register when upgrading or switching between devices. This ensures a simple, secure, and convenient user experience.

Registration Process

Passwordless Login Process

Mitigating Cyber Threats on Tourism Platforms

With HiTRUST’s passwordless authentication, Colatour’s users can enjoy a stress-free experience—no more complex passwords to remember or fear of account theft through phishing attacks. Instead, users authenticate securely using their unique individual biometrics, ensuring peace of mind across all devices.

For Colatour, FIDO secures customer accounts by preventing hacks and data leaks. With biometric authentication, it blocks fraudsters, lowers fraud risks, and builds stronger customer trust and safety.

On the other hand, Colatour users benefit from this advanced approach by replacing passwords with biometric authentication, providing a secure login and seamless experience. Users can easily log in to the website or app using facial recognition or fingerprint authentication, eliminating the hassle of entering account details while enhancing security. This creates a fast and safe digital tool for travelers, ensuring personal data and travel itineraries are protected from hackers and fraud.

Gaining a First-Mover Advantage with Passwordless Technology

Our partnership sets a new standard for secure, seamless user experiences in the travel industry. As more sectors adopt this innovative approach, Colatour leads the way. Not only can B2C members benefit from FIDO, but Colatour also offers B2B members access to biometric authentication on their website and app. Clients can easily log in with facial recognition or fingerprint authentication, ensuring a safer, worry-free travel experience and boosting customer engagement. By implementing advanced security measures like passwordless authentication, Colatour not only protects customers from potential fraud but also strengthens trust and loyalty. HiTRUST remains committed to delivering cutting-edge solutions, safeguarding Colatour and its travelers, and paving the way for a secure future in the travel industry.

About Colatour Travel Service CO., LTD.

Founded in 1978, Colatour Travel Service CO., LTD. is Taiwan’s largest travel agency in terms of group tours and a leading brand in the travel industry. With over 1,400 employees, Colatour operates one of the highest-traffic B2C websites and numerous physical stores. It is also the largest wholesale travel company in Taiwan. Over the past 40 years, Colatour has served more than 10 million outbound group travelers and issued hundreds of millions of airline tickets, earning numerous awards as a top partner from airlines, resorts, and hotels. The Colatour Group includes Colatour Travel, Comfort Travel Service, and Polaris Travel Service.

Discover more about how HiTRUST and Colatour are transforming the future of travel security:
TTN Media Article | Capturing the membership economy: Colatour launches its all-new “Cola Coin” rewards!

Read the Case Study

New Data Finds Brands are Losing Younger Customers Due to Password Pain, as Passkeys Gain Mainstream Momentum

Global FIDO Alliance study reveals latest consumer trends and attitudes towards authentication methods and their perceived online security 29 October 2024 – The FIDO Alliance today publishes its fourth annual […]

Global FIDO Alliance study reveals latest consumer trends and attitudes towards authentication methods and their perceived online security

Passkey familiarity growing – Just two years after passkeys were first announced and started to be made available for consumer use, awareness has risen by 50%, from 39% familiar in 2022 to 57% in 2024
Password usage stagnates as consumers favor alternatives – The majority of those familiar with passkeys are enabling the technology to sign in. Meanwhile, despite passwords remaining the most common way to log in, the number of people using passwords across use cases declined as alternatives continue to rise in availability
Waning password patience is costing sales and loyalty, especially among younger consumers – 42% of people have abandoned a purchase at least once in the past month because they could not remember their password. This increases to 50% for those aged 25-34 versus just 17% of over 65s
Online scams and AI alarming consumers – Over half of consumers reported an increase in the number of suspicious messages they notice and an increase in scam sophistication, driven by AI. Younger generations are even more likely to agree, while older generations remain unsure how AI impacts their online security

29 October 2024 – The FIDO Alliance today publishes its fourth annual Online Authentication Barometer, which gathers insights into the state of online authentication and consumer perceptions of online security in ten countries across the globe. 

Key findings 

The research revealed promising consumer momentum building around passkey adoption and clear signs people are recognizing the limitations of passwords and are choosing passwordless alternatives, like passkeys, where available. In the two years since passkeys were first announced, global awareness has jumped by 50%, rising from 39% familiar in 2022 to 57% in 2024. Awareness is driving adoption too – the majority of those familiar with passkeys (62%) are using them to secure their apps and online accounts.

The data also revealed the cost to organizations still relying on legacy password sign-ins, especially among younger generations: 42% of consumers abandoned a purchase in the last month due to a forgotten password, rising to over half of those under 35. Similarly, over half of consumers (56%) gave up accessing an online service in the last month because they couldn't remember a password, rising to 66% of those under 35. 

The survey revealed other clear signs that password usage and trust are stagnating globally as more secure and user-friendly passwordless alternatives become available. Overall, the number of consumers entering a password manually across use cases decreased again from 2023, while biometrics ranked, for the second year running, as both the authentication method consumers consider to offer the best login experience and the method they consider most secure. 

When consumers were asked how they had improved their account security in the last year, the share who said they increased the complexity of a password continued to decline, while the share choosing biometrics or using authenticator apps steadily rose. 

Passkeys at two: the road to mainstream 

“Consumer expectations are changing, and this data should serve as a clear call to action for brands and organizations still relying on outdated password systems. Consumers are actively seeking out and prefer passwordless alternatives when available, and brands that fail to adapt are losing patience, money, and loyalty – especially among younger generations. 

"When consumers know about passkeys, they use them. Excitingly, 20% of the world's top 100 websites and services already support passkeys. As the industry accelerates its efforts toward education and making deployment as simple as possible, we urge more brands to work with us to make passkeys available for consumers. The pace of passkey deployment and usage is set to accelerate even more in the next twelve months, and we are eager to help brands and consumers alike make the shift," comments Andrew Shikiar, CEO at FIDO Alliance. 

Notably, passkeys have seen strong adoption in high-growth, digitally advanced markets like China and India, which ranked top globally with 80% and 73% enablement, respectively. The UK followed close behind in third place, with adoption levels at 66%. 

Younger consumers most attuned to online scams and AI threats 

Consumer concerns about online security were also revealed to be high – and again, it is younger consumers who are most attuned to new threats. 

Over half of consumers (53%) cited an increase in the number of suspicious messages they noticed in recent months, driven mostly by SMS (53%) and email (49%). Similarly, 51% detected an increase in the sophistication of threats and scam messages, likely driven by AI-enhanced attacks. The demographic data suggests older generations may be at greatest risk precisely because they are less likely to notice these threats: 54% of 18-24-year-olds and 61% of 25-34-year-olds noticed scams getting smarter, while just a third of 55-64-year-olds and 25% of those 65+ said the same. Similarly, 20% of people over 55 said they were unsure what impact AI has on their online security. 

ENDS 

Notes to editors 

Research for the FIDO Alliance’s Online Authentication Barometer was conducted by Sapio Research among 10,000 consumers across the UK, France, Germany, US, Australia, Singapore, Japan, South Korea, India, and China. 

About FIDO Alliance 

The FIDO (Fast IDentity Online) Alliance, www.fidoalliance.org, was formed in July 2012 to address the lack of interoperability among strong authentication technologies and to remedy the problems users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is changing the nature of authentication with standards for simpler, stronger authentication that define an open, scalable, interoperable set of mechanisms that reduce reliance on passwords. FIDO Authentication is stronger, more private, and easier to use when authenticating to online services. 

Contact
press@fidoalliance.org