Last Update 6:18 PM April 29, 2024 (UTC)

Organizations | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!

Monday, 29. April 2024

DIF Blog

Welcoming Ankur Banerjee to the Helm of DIF TSC

The past year at the Decentralized Identity Foundation (DIF) has been a period of remarkable progress. We have made significant strides in advancing the interests of the decentralized identity community, focusing on research and development to establish interoperable global standards. As we continue our journey to enable a world where decentralized identity solutions empower entities to gain control over their identities, we are delighted to share an exciting update.

Today, we are thrilled to announce that Ankur Banerjee has been elected as the new co-chair of the DIF Technical Steering Committee (TSC). Ankur plays an increasingly critical role at DIF, and last year he was elected to the DIF Steering Committee. He joins current co-chair Andor Kesselman, and together they are poised to drive DIF's mission forward with renewed vigor and vision.

Introducing Ankur Banerjee

With over a decade of corporate experience and more than three years as a startup co-founder, Ankur is now a serial entrepreneur and decentralized identity advisor.

Ankur Banerjee is the co-founder and CTO at cheqd, a privacy-preserving decentralized identity network that allows users and organizations to take full control of their data. He oversees the product and engineering department to ensure their diverse suite of identity and reputation products is built to a high standard to meet the needs of organizations and individual users. Moreover, he is a frequent speaker at blockchain and identity conferences including the Internet Identity Workshop, European Identity & Cloud Conference, Nebular Summit, and many others. Prior to cheqd, Ankur worked at R3, an enterprise blockchain company; IDWorks, a digital identity startup; and as a founding member of global consultancy Accenture’s innovation lab.

Ankur is a co-inventor on multiple patents in the realm of cloud-based artificial intelligence services and blockchain, including the Hyperledger blockchain automation framework. His knowledge extends into distributed tech architecture, biometrics, and digital identity. These areas are crucial as DIF navigates the complexities of modern digital identities and looks towards a future where these technologies play a pivotal role in everyday interactions.

On a personal note, Ankur is an ardent advocate for mental health, mentoring, and inclusion. His passion for these causes aligns perfectly with DIF's values of promoting a more inclusive and accessible digital world. Ankur's commitment to these principles is not just admirable but essential as we aim to foster an environment where technology serves everyone equitably.

What this Means for DIF

At DIF, we recognize the importance of strong leadership in achieving our goals. As the co-chair of the TSC, Ankur will play a critical role in steering the committee’s efforts, overseeing important initiatives, and ensuring that our projects align with our strategic objectives. 

“We are extremely fortunate to have Ankur join us in a TSC leadership role," said Kim Hamilton Duffy, DIF’s Executive Director. "As decentralized identity technology continues to mature, it finds broader adoption across diverse markets and use cases, making the need for interoperability and compliance with emerging regulations more critical than ever. Ankur’s deep expertise across both consumer and enterprise spheres aligns perfectly with our strategic goals. His leadership will be crucial in navigating these evolving landscapes and ensuring that DIF remains at the forefront of technological innovation and standards development.”

"I am deeply honored to take on the role of co-chair of the DIF Technical Steering Committee. Decentralized identity technology is at an inflection point for adoption, where the basic building blocks of the standards are built and settled. I look forward to supporting DIF member organizations in accelerating their industry-wide efforts to make it easier for developers to build usable, privacy-preserving experiences that give users control of their data,” said Ankur Banerjee.

Ankur brings unlimited energy and passion, ready to roll up his sleeves, work with us on the ground, and help define long-term strategy. His leadership will be instrumental in guiding the committee to achieve new milestones and pave the way for innovative solutions in the decentralized identity space.

We encourage all our readers and members to extend their warmest welcome to Ankur. We are excited to see where his leadership will take us and how his innovative thinking and commitment to technology and society will help shape the future of decentralized identity.

For more updates on our projects and initiatives, stay tuned to our blog and upcoming events.


FIDO Alliance

Verdict: OneSpan: Partner Ecosystem Profile

The company’s various solutions include regulatory compliance, PSD2 compliance, FIDO standard, fraud prevention, mobile app security, transaction signing, digital onboarding and omnichannel security solutions. It operates in North America, Europe and the Asia Pacific regions.


Tech telegraph: WhatsApp now rolling out passkey support for iPhone users

Passkey is a technology developed by the FIDO Alliance in collaboration with major companies like Apple, Google, and Microsoft. Instead of traditional passwords, it enables users to log in using secure methods like facial recognition or biometrics, eliminating the need to create and type a passcode.


Biometric Update: NIST issues guidance to fit passkeys into digital identity recommendations

Andrew Shikiar, CEO of the FIDO Alliance, noted that the updated NIST guidance confirms passkeys’ ability, along with other FIDO authenticators, to meet AAL2 and AAL3 requirements. Synchronized passkeys can achieve AAL2, while device-bound passkeys can reach AAL3.


TechCrunch: WhatsApp adds global support for passkeys on iOS

WhatsApp is launching passkey verification on iOS, eliminating the requirement for users to manage SMS one-time passcodes. The company announced on Wednesday that this feature is currently being rolled out and will soon be accessible to all iOS users.


Origin Trail

PolkaBotAI — decentralizing AI with OriginTrail and Polkadot

The explosive rise of Artificial Intelligence has sparked the first stages of a new knowledge revolution, comparable historically with the invention of the printing press or the world wide web. Its sudden growth has also exposed threats and shortfalls such as hallucinations, bias, mishandling of intellectual property rights and even potential AI model collapses. Both the opportunities and challenges of this knowledge revolution show the need for convergence between Crypto, Internet and AI in a Verifiable Internet for AI proposed by OriginTrail. The Verifiable Internet for AI addresses AI’s shortfalls by decentralizing the knowledge that AI systems use in their solutions. One of the pioneering implementations of the approach is Polkabot.AI — a decentralized AI education hub on Polkadot which will see its full release in the coming months after receiving support from the Polkadot Treasury.

Spearheading the development of Polkabot.ai is Trace Alliance, a collaborative hub that builds partnerships for creating and leveraging trusted knowledge in the age of AI. The solution brings the vision of decentralized AI to reality by revolutionizing how anyone can interact with and learn about the Polkadot ecosystem, whether they are a novice taking their first steps or a seasoned user looking for the latest updates. It achieves that by allowing the wider Polkadot ecosystem to get involved in creating the trusted knowledge base that Polkabot’s AI system uses to construct its responses. Unlike solutions relying solely on generative AI, Polkabot implements a novel decentralized Retrieval Augmented Generation (dRAG) approach leveraging the OriginTrail Decentralized Knowledge Graph (DKG). Instead of relying solely on AI to produce a generated response, Polkabot uses AI to construct a response from trusted inputs in the DKG populated by the wider Polkadot community. This gives the solution information provenance and respect for data ownership, and gives the user a chance to verify each source and its issuer used in the final response. The trusted knowledge Polkabot’s AI systems access will be continuously expanded through community curation and a knowledge publishing process called knowledge mining.
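To make the dRAG idea above more concrete, here is a minimal TypeScript sketch of the retrieve-then-generate pattern, written under stated assumptions rather than against the actual PolkaBot or OriginTrail code: queryKnowledgeGraph and generateAnswer are hypothetical stand-ins for a DKG query and a language-model call.

interface KnowledgeAssertion {
  content: string // the published knowledge itself
  ual: string     // Universal Asset Locator identifying the asset on the DKG
  issuer: string  // DID of the publisher, so end users can verify provenance
}

// Hypothetical dependencies, injected so the sketch stays self-contained.
type GraphQuery = (question: string) => Promise<KnowledgeAssertion[]>
type Generator = (question: string, context: string[]) => Promise<string>

async function answerWithProvenance(
  question: string,
  queryKnowledgeGraph: GraphQuery,
  generateAnswer: Generator,
) {
  // 1. Retrieve trusted, community-published knowledge instead of relying
  //    solely on what the model memorised during training.
  const assertions = await queryKnowledgeGraph(question)

  // 2. Let the model compose a response constrained to the retrieved inputs.
  const answer = await generateAnswer(question, assertions.map((a) => a.content))

  // 3. Return the answer together with verifiable sources the user can inspect.
  return {
    answer,
    sources: assertions.map((a) => ({ ual: a.ual, issuer: a.issuer })),
  }
}

The point of the pattern is the final step: every response carries references back to knowledge graph assertions, which is what gives the user provenance and the ability to check each source.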

The Polkabot.ai initiative received backing through an OpenGov treasury proposal, which was approved on April 23, 2024. As an AI-powered educational platform that tailors learning to each user, PolkaBot.AI represents a significant leap forward in how we interact with Polkadot’s diverse content — whether within the ecosystem or in outward communications — enabling users to access trusted knowledge and get precise responses online.

Stay updated with the latest developments by following PolkaBotAI on X.

PolkaBotAI — decentralizing AI with OriginTrail and Polkadot was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


Identity At The Center - Podcast

Join us in the latest episode of The Identity at the Center Podcast

Join us in the latest episode of The Identity at the Center Podcast where we discuss listener questions and critique AI answers. We delve into topics such as key IAM metrics, the challenges of implementing IAM strategies in large multinational companies, and upcoming trends in the IAM sector. You can watch it on YouTube at https://www.youtube.com/@idacpodcast or listen to this episode at https://idacpodcast.com or in your podcast app. Don't forget to share your thoughts with us and thanks to the listeners for their questions.

#iam #podcast #idac


ResofWorld

African universities are failing to prepare tech graduates for jobs in AI

Dozens of new AI training startups are filling the gap, offering online courses, hackathons, and job placement help.
After she graduated with a computer science degree from a state university in Nigeria late last year, Oyinda Olatunji was confident she’d land a job with a local data science...

Sunday, 28. April 2024

Elastos Foundation

Join the Elastos Movement: The Call to the Cyber Republic Council 5

Traditional organisations are often hierarchical, centralising decision-making among a few top-level individuals, which limits broader participation. In contrast, blockchain technology supports decentralised and transparent governance on a distributed ledger, enhancing security, immutability, and trust without central oversight.

Decentralised Autonomous Organisations (DAOs) embody decentralisation by distributing decision-making across all global members online, preventing power imbalances and promoting equity. DAOs are community-driven, with each member having a stake and a voice in organisational decisions, aligning individual success with the collective success of the organisation. This incentivises members to prioritise the organisation’s best interest, aligning with Founder Rong Chen’s vision for how Elastos should be run.

 

Elastos’ Cyber Republic DAO

Elastos’ Cyber Republic DAO employs a democratic, blockchain-based election model, fostering decentralised governance within the Elastos ecosystem. Annually, during a one-month election period, all community members holding ELA coins from the Elastos Mainchain, secured by Bitcoin, cast their votes, proportional to their ELA holdings, to elect 12 Cyber Republic Council members. The top 12 candidates receiving the most community votes secure their positions for a term of about one year, or 262,800 main chain blocks. After four years of operation, the fifth-year election begins tomorrow at block height 1,678,450, a huge milestone!

Joining the Cyber Republic Council offers a unique opportunity to directly shape Elastos’ future and be a part of its decentralised governance history. This role allows you to influence strategic decisions and manage the CR treasury, aligning both individual and community goals within the Elastos ecosystem. It’s an eye-opening experience, offering prestige, leadership recognition, and substantial ELA rewards. If you believe in Elastos’ vision and have the necessary resources, we encourage you to participate in this part of our ecosystem by running and campaigning.

 

How to Run as a Council Member

To run for Council membership, candidates must possess Elastos Digital IDs and register their candidacy on the Essentials wallet. A refundable deposit of 5,000 ELA is required to ensure commitment to governance responsibilities. If the candidate is elected, this deposit serves as collateral and may be penalised if the member fails to vote on proposals.

Council members earn approximately 35% APY on their 5,000 ELA deposit and participate in decision-making, proposal evaluation, and strategic governance. Familiarity with the duties and expectations outlined in the Cyber Republic whitepaper is recommended. Council members manage the CR treasury, which contains 440,782 ELA today, with an additional top-up of 307,903 ELA due for the incoming Cyber Republic Council. They influence Elastos’ development and governance. The role offers prestige, recognition as community leaders, and ELA rewards through Bitcoin-secured mainchain validator incentives.

 

Voting for Council Members

As the Cyber Republic community, we will use our ELA coins to vote during the 30-day election, with each staked coin equal to one vote; staked coins are not locked and can be withdrawn at any time. Transparency and communication are vital for gaining community trust and support.

The election starts tomorrow at block height 1,678,450; monitor it here! Your participation supports Elastos and strengthens our network. Let’s build a transparent, inclusive future. Vote wisely, and thank you for your dedication to the Cyber Republic.

Do you want to run? Act now, join the Cyber Republic Council, and help drive our Elastos community forward. For the latest election news, follow the Cyber Republic Twitter and Elastos Telegram.


Velocity Network

Velocity: The Next Gen Public Infrastructure for Workforce Credential Verification

Dror Gurevich, our CEO, has joined Trevor Schachner at SHRM for an exciting episode of SHRM’s WorkplaceTech Spotlight podcast. The post Velocity: The Next Gen Public Infrastructure for Workforce Credential Verification appeared first on Velocity.

Friday, 26. April 2024

OpenID

Shared Signals: Enhanced Security for All

Last month, at the Gartner Identity and Access Management Summit in London, industry leaders showcased successful, interoperable implementations of the Shared Signals Framework (SSF) and Continuous Access Evaluation Profile (CAEP). These included Okta, SailPoint, and Cisco, as well as security startups SGNL, VeriClouds, and Helisoft. The SSF suite of standards underpins Zero-Trust architectures and promises to enable a more secure digital future for everyone.

The Shared Signals Framework is a Game Changer

Today’s businesses and their users demand seamless access to services. Often, this involves many concurrent logged-in sessions to countless applications – and these sessions can last days or even weeks at a time. Over the course of a session, plenty can change:

A user may change their location
A malicious application may be found on a device
Users may be granted new privileges (or privileges may have been revoked)

Furthermore, there may be suspicious activity on user accounts that has meaningful implications for other dependent services – like an email address that is used to log in to many other online services.

The industry has worked hard over the last decade to make single sign-on and federated identity possible. This greatly improved the experience for users and opened many doors to make adoption of SaaS widespread. However, closing doors when needed was more of an afterthought, and hasn’t been fully solved.

While many security solutions now exist, a lot of actionable data sits siloed within individual tools, applications, and dashboards—and it hasn’t historically traveled across service providers. This lack of data sharing constrains the implementation of a Zero Trust security posture.

Enter SSF: the Solution

The Shared Signals Framework is an open API built upon a suite of protocols that enable applications and service providers to communicate about security events in order to make dynamic access and authorization decisions. It acts as a signaling layer on a back channel that helps to secure sessions in near real time.

Back in 2019, Google introduced a standards-based approach to continuously evaluating access authorization. The Continuous Access Evaluation Profile (CAEP) created a simpler way for IdPs and services to convey information about a given session. Meanwhile, the OpenID Foundation published the Risk & Incident Sharing and Collaboration (RISC) specification to define a standardized way to communicate account-level risk events. The two initiatives merged within the OpenID Foundation and formed the Shared Signals working group. CAEP and RISC are now profiles on top of the Shared Signals Framework (SSF). 

Now a maturing standard, SSF is an API with a standard format for expressing both account-level and session-level Security Events. It offers seamless, privacy-preserving data-sharing about security events between service providers. Organizations can easily integrate SSF into their security infrastructure and begin sending and receiving Security Events across an ecosystem. This enables organizations to deliver Zero Trust security underpinned by continuous risk assessment efficiently and at scale.
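To illustrate what such a Security Event looks like on the wire, here is a hedged sketch of a CAEP session-revoked event payload, written as a TypeScript object. The field values are invented, and the exact claim layout can differ between spec revisions; in practice the payload is signed as a Security Event Token (a JWT) and delivered over the SSF push or poll mechanism.

// Illustrative only: values are hypothetical, and claim placement may vary
// across versions of the Shared Signals and CAEP specifications.
const sessionRevoked = {
  iss: "https://idp.example.com/",             // transmitter of the event
  aud: "https://app.example.com/",             // receiver that should act on it
  iat: 1714390000,                             // when the token was issued
  jti: "24c63fb56e5a2d77a6b512616ca9fa24",     // unique token identifier
  events: {
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
      subject: {
        format: "email",
        email: "user@example.com",             // which subject the event is about
      },
      event_timestamp: 1714389990,             // when the session was actually revoked
    },
  },
};

// A receiver that trusts this transmitter would terminate the matching
// session in near real time instead of waiting for the next token refresh.
console.log(JSON.stringify(sessionRevoked, null, 2));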

Security through Collaboration

In a recent report by the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA), the authors note that the Shared Signals Framework is an emerging, promising standard gaining traction in the industry. They state “support for and the development of these standards in the enterprise ecosystem will enable a variety of security use cases, ranging from limiting access to managed devices to quickly revoking access when accounts are compromised.” They further recommend broader support for the development and implementation of identity standards as a crucial underpinning of security.

The interoperability session held at the Gartner IAM Summit in London demonstrates not only the latest in security protocols but also the industry’s shift towards collaborative security enabled by Open Standards. By sharing these security events across an ecosystem of trusted parties, organizations gain more informed Zero Trust implementations and are empowered to mitigate threats more effectively.

Input to the Work Group

The OIDF Shared Signals Work Group is very active and welcomes a wider set of requirements from implementors. For example, implementors at April’s Internet Identity Workshop (IIW) discussed the possibility of using the Shared Signals Framework to communicate lifecycle and security signals between participants (issuers, wallets, etc.) in mobile driving license (mDL) and other digital identity ecosystems. Such an approach would lower the burden on issuing authorities to deploy across wallets and ensure their policies are enforced consistently.

Join us to get involved!

About the OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate. Find out more at openid.net.

The post Shared Signals: Enhanced Security for All first appeared on OpenID Foundation.


Elastos Foundation

Cyber Republic DAO LLC: From U.S. Soil to Marshall Islands Sovereignty.

A Vision for Crypto-friendly Jurisdiction

Cyber Republic DAO LLC, an entity previously incorporated in Delaware, U.S., has strategically repositioned itself by migrating to the Marshall Islands. This move is driven by a fundamental principle: seeking an environment where the legal framework is not just reactive to technology but proactive in embracing the new digital era.

The U.S., while technologically advanced, presents a legal landscape fraught with complexities and ambiguities concerning digital currencies and decentralised autonomous organisations (DAOs). The Cyber Republic’s exploration for a jurisdiction that not only understands but advocates for the underlying ethos of cryptocurrency has led to this big shift. The Marshall Islands, contrasting the U.S., provides a pioneering legal infrastructure that explicitly recognises and supports the unique operational needs of DAOs.

 

 

A Quick Cyber Republic DAO Run Down

A DAO, or Decentralised Autonomous Organisation, is an organisational structure built on blockchain technology, where decisions are made from the bottom up, governed by consensus rather than a central authority. The Cyber Republic (CR) is a DAO that specifically governs the Elastos ecosystem, managing its development and community assets through a system of proposals and community consensus.

The Cyber Republic employs a governance system where community members holding ELA, Elastos’ currency secured by Bitcoin’s hash rate since 2018, can participate in decision-making processes. It involves:

Electing Council members yearly through democratic voting to act as proposal managers and decision-makers.
Submitting, voting on, and implementing proposals that guide the ecosystem’s development.
Using blockchain technology to ensure all actions are transparent, immutable, and verifiable.

The Cyber Republic promotes a robust, fair, and engaged community by decentralising power, encouraging contributions, and managing resources through collective decision-making, ensuring all members influence the direction of the Elastos ecosystem. To learn more, you can read the CR whitepaper here and visit Elastos’ website.

 

 

The Marshall Islands’ Progressive Legislation

Led by Cyber Republic Council Member Sasha Mitchell, with support from Council Member Nenchy, CR Proposal 144 was passed on January 24th, culminating three months of effort to establish this new entity working with MIDAO. Here are the core reasons for selecting the Marshall Islands:

Strategic Location and Sovereignty: Located between Hawaii and Australia, the Marshall Islands leverages its sovereignty and UN membership to set its own policies, making it an attractive location for crypto-focused laws and activities.
Global Leadership in Crypto Legislation: The enactment of the Decentralised Autonomous Organisation Act of 2022, along with previous crypto-friendly laws, places the Marshall Islands at the forefront of global crypto legislation, demonstrating a progressive stance towards digital currencies and blockchain technology.
Innovative Legal Framework: The Marshall Islands passed a DAO law, one of the first of its kind, which allows for the legal recognition of DAOs as entities that can interact with both the on-chain and off-chain legal systems. This law positions the Marshall Islands as a forward-thinking jurisdiction in the crypto space.
Support for Crypto Entrepreneurs: Over 100 DAOs have been established in the Marshall Islands, showing a supportive environment for crypto entrepreneurs, founders, and investors seeking a competitive jurisdiction.
Experience with Digital Currencies: The Marshall Islands’ history with digital currencies, including the Sovereign (SOV) as legal tender, despite regulatory and implementation challenges, showcases their long-term commitment to integrating crypto into their economic system.
Criticism and Validation: The initial criticism from international financial institutions and subsequent recognition of the importance of digital currencies highlight the Marshall Islands’ pioneering role in the crypto domain.
Cryptocurrency and DAO Advantages: The DAO law facilitates web3 and crypto projects by addressing common legal barriers, such as anonymity and decentralised governance, providing a more suitable legal structure for DAOs compared to traditional entities.
Economic Diversification: The government views the digitalisation of its economy through laws like the DAO law as a means to achieve economic stability, considering its limited physical resources and the challenges posed by climate change.
Global and Local Benefits: The DAO framework not only benefits the global crypto community by offering a compliant and progressive jurisdiction but also promises economic benefits and technological advancement for the Marshall Islands, positioning it as a key player in the digital economy.

But don’t just take our word for it; watch Balaji Srinivasan, the former Chief Technology Officer of Coinbase, former general partner at the venture capital firm Andreessen Horowitz, and author of “The Network State,” share his thoughts on the Marshall Islands in this video here!

A New Era in Blockchain Governance

The relocation of Cyber Republic DAO to the Marshall Islands marks a significant pivot in the structure of digital organisations. It’s a move that reflects the core values of Elastos—decentralisation, transparency, and democratic participation. The transition comes as the Cyber Republic heralds the DAO’s fifth council year, powered by the collective voice of its global community. This evolution in governance is not merely an administrative change but a demonstration of a system where decision-making is distributed across its membership, each of whom holds a stake in the direction and success of the DAO.

Strategic Relocation and Legal Empowerment

This strategic relocation underlines a broader shift in the cryptocurrency landscape, where the influence of traditional jurisdictions like the U.S. is balanced by smaller states through innovative legislation. The Marshall Islands, by welcoming the Cyber Republic, becomes a pivotal hub in the digital economy. This proactive step towards a crypto-friendly future empowers the DAO with the legal authority to forge binding agreements, opening doors to growth opportunities that harmonise individual and collective goals, such as moving forward on Proposal 151 on Market Making and 152 on Strategic Investment. It’s a forward leap that not only aligns with the vision of Elastos’ founder Rong Chen but also stands as a collective achievement for the community, setting a course for the future of Web3 and decentralised digital ecosystems. Congratulations to the community, the Cyber Republic and the CR Council on this milestone!

In a few days, the Cyber Republic Council will begin its 5th year elections! You can be a part of the decision-making that drives our community forward. Follow us for the latest updates on upcoming elections information and guidance.

 

Thursday, 25. April 2024

Ceramic Network

Ceramic Feature Release: SET Account Relations, Immutable Fields and shouldIndex flag

The powerful new features in Ceramic and ComposeDB offer users a sophisticated toolkit for data management. Explore the SET account relation, immutable fields and the benefits of shouldIndex flag.

The functionality of Ceramic and ComposeDB has been recently enhanced by a number of new features that give developers more control over account relation definitions and data accessibility. More specifically, you can now use the following tools to enhance your applications:

SET account relation - enables users to enforce a constraint where each user account (or DID) can create only one instance of a model for a specific record of another model.
Immutable fields - allow specific data to be protected from being altered.
shouldIndex flag - gives developers an option to manage data visibility by choosing which streams should be indexed.

In this blog post, we are going to dive into these features in more detail. For a video walkthrough, check out this video tutorial.

SET account relations

SET relations in ComposeDB enable developers to define relations between the data models that follow specific constraints and include the user as part of the relationship. SET account relation allows users to enforce the constraint that a specific account (DID) can have only one instance of a model for a specific record of another model.

The best example to illustrate this is the “like” feature of a social media application. A SET relation can be used to make sure that a user (DID) can “like” a specific post only once, while still allowing the user to like multiple posts.

Let’s have a look at how SET Relations can be used in practice.

Ceramic Layer

To use the SET account relation in Ceramic, you will first have to define a SET accountRelation in your model definition. The example below consists of two simple models - POST_MODEL, representing the model definition for social media posts, and LIKE_MODEL, representing the model definition for users liking the posts.

The model definition for POST_MODEL has the accountRelation as a list, meaning that one user account will be allowed to create multiple posts.

The model definition for LIKE_MODEL has a SET accountRelation and includes the fields which should be used to create the unique relation - postID and userID. This defines that a specific user can create only one "like" record for a specific post.

const POST_MODEL: ModelDefinition = {
  name: 'Post',
  version: '2.0',
  interface: false,
  implements: [],
  accountRelation: { type: 'list' },
  schema: {
    $schema: 'https://json-schema.org/draft/2020-12/schema',
    type: 'object',
    additionalProperties: false,
    properties: {
      content: { type: 'string', maxLength: 500 },
      author: { type: 'string' },
    },
    required: ['content', 'author'],
  },
}

const LIKE_MODEL: ModelDefinition = {
  name: 'Like',
  version: '2.0',
  interface: false,
  implements: [],
  accountRelation: { type: 'set', fields: ['postID', 'userID'] },
  schema: {
    $schema: 'https://json-schema.org/draft/2020-12/schema',
    type: 'object',
    additionalProperties: false,
    properties: {
      postID: { type: 'string' },
      userID: { type: 'string' },
    },
    required: ['postID', 'userID'],
  },
}

ComposeDB Layer

Now let's see an example of how you can use SET account relations in ComposeDB. Similar to the example above, the key component that allows you to define the SET account relation for a specific model is the accountRelation scalar alongside the fields that should be used to define the unique relation.

Take the example below. Here we have two models defined using GraphQL schema definition language. The first model is a model for storing data about a Picture - the source and the dimensions of the image. The model definition Favourite implements the behavior of the user setting a picture as a favorite. Note that this model has an accountRelation defined as SET. The field that is used to define the relation is docID, which refers to the document ID of the picture record.

type Picture @createModel(description: "A model for pictures", accountRelation: SINGLE) {
  src: String! @string(maxLength: 150)
  mimeType: String! @string(maxLength: 50)
  width: Int! @int(min: 1)
  height: Int! @int(min: 1)
  size: Int! @int(min: 1)
}

type Favourite @createModel(description: "A set of favourite documents", accountRelation: SET, accountRelationFields: ["docID"]) {
  docID: StreamID! @documentReference(model: "Picture")
  doc: Node @relationDocument(property: "docID")
  note: String @string(maxLength: 500)
}

All this means that a user can create only one Favourite record per picture. They can mark many different pictures as favourites, but never more than one record for the same picture.
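As a rough mental model (not the ComposeDB implementation), the SET relation behaves like an upsert keyed on the account plus the relation fields. The sketch below simulates that invariant in plain TypeScript; favouritePicture is a hypothetical helper, not a generated ComposeDB mutation, and the stream IDs are made up.

// Simulates the SET invariant: at most one Favourite per (account, docID) pair.
type FavouriteRecord = { controller: string; docID: string; note?: string }

const favourites = new Map<string, FavouriteRecord>()

function favouritePicture(controller: string, docID: string, note?: string): FavouriteRecord {
  const key = `${controller}:${docID}`
  const existing = favourites.get(key)
  if (existing) {
    // A second write for the same account and picture updates the existing
    // record instead of creating a duplicate.
    existing.note = note ?? existing.note
    return existing
  }
  const created = { controller, docID, note }
  favourites.set(key, created)
  return created
}

favouritePicture('did:key:alice', 'streamid-pictureA', 'love this one')
favouritePicture('did:key:alice', 'streamid-pictureA', 'still my favourite') // same record, updated
favouritePicture('did:key:alice', 'streamid-pictureB')                       // a new record
console.log(favourites.size) // 2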

Immutable Fields

Another feature that has recently been added to Ceramic is Immutable Fields. With Immutable Fields, you are able to define which fields (for example, some critical data) should remain unchangeable and be accessible as read-only data. Any attempt to alter data marked as immutable results in an error.

Ceramic Layer

Defining specific fields as immutable is pretty simple. Below we have an example of a simple model defining a Person - their address, name, and other details. To make these fields immutable, you simply need to include them in the immutableFields array. In the example below, the fields address, name, myArray, and myMultipleType are set as immutable, meaning that once this data is created, it will be unchangeable:

const example_model: ModelDefinition = {
  name: 'Person',
  views: {},
  schema: {
    type: 'object',
    $defs: {
      Address: {
        type: 'object',
        title: 'Address',
        required: ['street', 'city', 'zipCode'],
        properties: {
          city: { type: 'string', maxLength: 100, minLength: 5 },
          street: { type: 'string', maxLength: 100, minLength: 5 },
          zipCode: { type: 'string', maxLength: 100, minLength: 5 },
        },
        additionalProperties: false,
      },
    },
    $schema: 'https://json-schema.org/draft/2020-12/schema',
    required: ['name', 'address'],
    properties: {
      name: { type: 'string', maxLength: 100, minLength: 10 },
      address: { $ref: '#/$defs/Address' },
      myArray: { type: 'array', maxItems: 3, items: { type: 'integer' } },
      myMultipleType: { oneOf: [{ type: 'integer' }, { type: 'string' }] },
    },
    additionalProperties: false,
  },
  version: '2.0',
  interface: false,
  relations: {},
  implements: [],
  description: 'Simple person with immutable field',
  accountRelation: { type: 'list' },
  immutableFields: ['address', 'name', 'myArray', 'myMultipleType'],
}

ComposeDB Layer

In ComposeDB, a specific field can be set as immutable by adding a directive @immutable to the fields that should remain unchangeable. For example:

type ModelWithImmutableProp @createModel(
  accountRelation: SINGLE,
  description: "Test model with an immutable int property"
) {
  uniqueValue: Int @immutable
  uniqueValue2: Int @immutable
}

Here, the fields uniqueValue and uniqueValue2 are set as immutable.
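To see what immutability means in practice, here is a minimal sketch at the Ceramic layer, reusing the Person model defined earlier. It assumes an authenticated Ceramic client (ceramic) and the stream ID of the deployed Person model (personModelID), both declared here as placeholders; the exact error message will vary by version.

import { ModelInstanceDocument } from '@ceramicnetwork/stream-model-instance'

declare const ceramic: any        // an authenticated Ceramic client (assumed)
declare const personModelID: any  // StreamID of the deployed Person model (assumed)

// Create a Person document whose content satisfies the schema above.
const person = await ModelInstanceDocument.create(
  ceramic,
  {
    name: 'Ada Lovelace Example',
    address: { street: 'Main Street 1', city: 'London Town', zipCode: 'EC1A 1BB' },
  },
  { model: personModelID },
)

// Any attempt to alter a field listed in immutableFields (here: name)
// is rejected by the node with a validation error.
try {
  await person.replace({ ...person.content, name: 'Someone Else Entirely' })
} catch (err) {
  console.error('Update rejected because "name" is immutable:', err)
}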

shouldIndex Flag

Last but not least, let’s talk about the shouldIndex flag available in Ceramic and ComposeDB. The shouldIndex flag allows you to control stream indexing by toggling a boolean metadata flag, giving you a way to manage data visibility. By setting the flag to false, you can exclude a stream from being indexed, making it “invisible” to indexing operations. Let’s take a look at how you can use this feature.

Ceramic Layer

When working with model documents (ModelInstanceDocument), there is a new method called shouldIndex(value). Calling it with false indicates that the stream corresponding to this document should not be indexed; calling it with true re-indexes an existing document, e.g.:

const document = await ModelInstanceDocument.create(ceramic, CONTENT0, midMetadata)

// Unindex
await document.shouldIndex(false)

ComposeDB Layer

There are two ways to signal that a stream shouldn’t be indexed using ComposeDB. The first is by including the shouldIndex option in a mutation query, setting it to true if the stream should be indexed and to false otherwise:

const runtime = new ComposeRuntime({ ceramic, context, definition: composite.toRuntime() })

await runtime.executeQuery<{ updateProfile: { viewer: { profile: { name: string } } } }>(
  `
  mutation UpdateProfile($input: UpdateProfileInput!) {
    updateProfile(input: $input) {
      viewer {
        profile {
          name
        }
      }
    }
  }
  `,
  { input: { id: profileID, content: {}, options: { shouldIndex: false } } },
)

The second way is to use a mutation type called enableIndexing. Just like a create or update mutation, it should be paired with the model’s name, sending the streamId and shouldIndex value as part of the input, e.g.:

const enableIndexingPostMutation = `
  mutation EnableIndexingPost($input: EnableIndexingPostInput!) {
    enableIndexingPost(input: $input) {
      document {
        id
      }
    }
  }
`

await runtime.executeQuery<{ enableIndexingPost: { document: { id: string } } }>(
  enableIndexingPostMutation,
  { input: { id, shouldIndex: false } },
)

Note that the shouldIndex flag doesn’t delete the data. If set to false, the stream will still exist on the network; however, it will not be indexed and will not be available for data interactions.

Summary

The powerful new features in Ceramic and ComposeDB offer users a sophisticated toolkit for data management. From enforcing unique constraints with SET account relations to securing key data with immutable fields and controlling indexing operations using the shouldIndex flag, these features empower developers to build robust and efficient data models for their applications. Check out the Ceramic documentation for more information and examples.

Let us know how you are using all of these new features by posting on our Ceramic developer community forum.

Ceramic Resources

Developer Documentation: https://developers.ceramic.network/

Discord: https://chat.ceramic.network/

Github: https://github.com/ceramicnetwork

Twitter: https://twitter.com/ceramicnetwork

Website: https://ceramic.network/

Forum: https://forum.ceramic.network/


Hyperledger Foundation

Introducing the Telecom Decentralized Identity Network (TDIDN)

This solution brief introduces the groundbreaking concept of a Telecom Decentralized Identity Network (TDIDN), a new way to improve identity management using decentralized identifiers (DID) and blockchain. This brief explores how TDIDN's innovative approach can improve security, efficiency, and privacy in telecom operations.


ResofWorld

The TikTok ban is bigger than just ByteDance

The U.S. clampdown on Chinese-owned companies won't end here.
After years of trying, the U.S. has finally issued ByteDance with an ultimatum: Sell TikTok or the app will be banned. The bill passed Congress with overwhelming support on Tuesday...

Singapore’s EV challenge: High prices and high-rises

The country’s new EV push is clashing with long-standing policies to discourage car ownership in general.
For Koh Jie Ming, the decision to switch to an electric vehicle five years ago was easy — there were significant savings to be had. But there were no EV...

Elastos Foundation

Embracing Bitcoin’s Promise through the BIT Index and Elastos’ Innovations

Bitcoin fundamentally transforms access to financial services by operating on a decentralized network, where trust is built through cryptographic proofs rather than a centralised authority. This is crucial in places like Nigeria and Brazil, where traditional banking either falls short or imposes prohibitive costs on users. Bitcoin’s promise, therefore, isn’t just theoretical but a practical solution to real economic constraints faced by millions. Its potential to safeguard assets against inflation and political instability makes it particularly appealing in these markets.

 

The Creation and Significance of the BIT Index

Elastos launched the BIT Index to track the understanding and usage of Bitcoin globally. This index serves a vital role by providing empirical data on how deeply Bitcoin is ingrained in the daily lives of its users, especially in tech-savvy demographics across various countries. By focusing on markets where digital currencies could leapfrog traditional financial systems, the BIT Index highlights areas where Bitcoin’s impact is most profound, quantifying trust and optimism in contrast to established economies. See here!

 

The Integration of BeL2 and Elastos’ Technologies to Enhance Bitcoin

Elastos redefines the structural framework of the internet with its SmartWeb platform, where Bitcoin’s blockchain plays a central role, being merged-mined and secured by Bitcoin’s network, a concept suggested by Satoshi Nakamoto in 2010. This opens a new symbiotic door, where Bitcoin’s utility and energy resources can be used in ways not previously considered possible, extending ‘Be Your Own Bank’ into ‘Be Your Own Internet’ and allowing you to control and monetise your data with Bitcoin in a world that is increasingly being digitised.

BeL2 by Elastos introduces a ‘BTC Oracle’ and a second-layer interoperability solution to Bitcoin’s network, allowing a data feed to be generated on Bitcoin that can communicate with Ethereum blockchains, making deals, executing contracts, and expanding its influence without ever compromising its integrity. As a result, Bitcoin users globally can extend the utility of their Bitcoin to interact with services and operate with reduced fees and increased speed. By using a layer atop the existing Bitcoin blockchain, BeL2 enables Layer 2 solutions which simplify transactions and further reduce the need for intermediaries, aligning with decentralisation and user empowerment.

BeL2’s ability to enable smart contracts directly on Bitcoin transforms it from a passive asset into an active tool for decentralised applications (dApps). This capability allows Bitcoin to engage in complex financial transactions and agreements, directly on its blockchain, without the need to convert into other crypto tokens. But what’s more, BeL2’s BTC Oracle data feed could be expanded to support decentralized indices, such as enhancing the BIT Index, by providing crucial privacy-preserving insights to stakeholders in Bitcoin markets regarding transactions, volume, and value.

Additionally, this data feed could offer market liquidity measures, real-time alerts for unusual activity, and insights into network health metrics. It could also facilitate risk management by identifying transaction patterns and market depth, and support regulatory compliance by aggregating anonymized data for AML and CFT compliance. So, after initial research has been conducted, what have we discovered? Let’s dive in!

 

Evaluating the Current Impact and Future Trajectory of Bitcoin with BeL2 and Elastos

According to the inaugural BIT Index, 20% of Nigerian consumers use Bitcoin daily, which is significantly higher than the global average of 15%. In the UAE, daily usage stands at 20%, compared to only 8% and 9% in more developed markets like Germany and the UK, respectively. This stark difference underscores the effectiveness of Bitcoin in fulfilling the financial needs of users in emerging markets, where traditional banking systems often fail to meet these needs due to high fees or lack of accessibility. As BeL2 broadens Bitcoin’s capabilities and integrates it into more daily transactions and contracts, the BIT Index could track an increase in Bitcoin’s usage and trust levels, particularly in regions eager for more sophisticated financial tools.

The future of Bitcoin, particularly in these emerging markets, looks optimistic. The BIT Index shows that 78% of Nigerian respondents and 70% of Brazilians believe Bitcoin’s usage and value will continue to rise. This optimism, coupled with the technological advancements provided by BeL2 and Elastos, sets the stage for a more robust adoption of Bitcoin. As these technologies mature, they are likely to catalyze the next wave of innovations in financial services, making Bitcoin not only a store of value but also a cornerstone for new economic interactions on the SmartWeb.

The integration of Bitcoin with technologies like BeL2 and the strategic insights provided by the BIT Index paint a promising picture for the future of decentralized finance. Through a simplified, secure, and user-centric approach, Bitcoin is poised to transform the economic landscape, especially in regions that stand to benefit the most from its promises.

Interested in staying up to date? Follow Elastos here and join our live community telegram.


LionsGate Digital

PLEASE UPDATE THE RSS FEED

The RSS feed URL you're currently using https://follow.it/lions-gate-digital-advocacy will stop working shortly. Please add /rss at the end of the URL, so that the URL becomes https://follow.it/lions-gate-digital-advocacy/rss


Wednesday, 24. April 2024

EdgeSecure

Edge’s Chief Digital Learning Officer Joins United States Distance Learning Association (USDLA) Public Policy Committee; Confirmed to Present at 2024 National USDLA Conference

NEWARK, NJ, April 24, 2024 – Edge’s Chief Digital Learning Officer, Joshua Gaul, joins an esteemed group of distance learning practitioners, vendors, and individuals with an academic interest in distance learning pedagogy, on the United States Distance Learning Association (USDLA) Public Policy Committee. 

USDLA is the premier professional membership organization designed to advocate for and support the needs of distance education leaders. USDLA’s resources support the Distance Education Professional Community who serve education, business, health, and government. Founded in 1987, USDLA continues its vision and mission to advocate, research, and share best practices in the utilization of distance learning modalities in education, business, health, and government, nationally and internationally.

Notes Gaul, Chief Digital Learning Officer, Edge, “I’m proud to join the diverse group of members on the USDLA’s Public Policy Committee. Our shared passion for all-things-distance-learning will enable us to focus on the countless public policy issues related to digital/distance learning that are top-of-mind with the USDLA Board of Directors and its members.”


Josh will serve as a presenter at the 2024 National Conference for the 37th edition of the National Distance Learning Conference. With a theme of Gateway to the Future of Distance and Digital Learning, the event will take place June 17-20, 2024 at the Marriott St. Louis Grand. Josh’s presentation, Sustainable Online Learning for All: Developing and Accelerating High-Quality Online Programs with Nonprofit Consortium, will take place June 18 at 3:30 pm. Those interested in registering for the event may do so via https://usdla.org/2024-conference-registration/.

To learn more about the USDLA, visit https://usdla.org/about/history/

About Edge

Edge serves as a member-owned, nonprofit provider of high performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable and thought-leading purpose-built, advanced connectivity, technologies and services.

The post Edge’s Chief Digital Learning Officer Joins United States Distance Learning Association (USDLA) Public Policy Committee; Confirmed to Present at 2024 National USDLA Conference appeared first on NJEdge Inc.


ResofWorld

Why IBM employees in Brazil are suing to be classified as tech workers

Unions in two Brazilian states are suing IBM in an effort to be recognized as tech employees, giving them access to better wages, benefits, and profit participation.
When IBM posted an ad for a remote job in Brazil last September, it included a caveat: “IBM, for institutional reasons, will not hire residents of Minas Gerais, even though...

Next Level Supply Chain Podcast with GS1

Behind the Barcode: Enhancing Supply Chain Efficiency with GS1 US Data Hub

In this episode of Next Level Supply Chain’s special series Behind the Barcode, Liz and Reid speak with Jess Urriola, the VP of Product Management at GS1 US, about GS1 US Data Hub - a tool built to secure data sharing between brand owners and trading partners. They talk about why this service is pivotal for maintaining data quality across the global supply chain, both from the strategic and technical aspects, from creating UPC barcodes to combating GTIN misuse.

Learn how GS1 US Data Hub supports regulatory compliance, the importance of global location numbers in traceability, and the benefits of ensuring each product has a unique identifier—similar to a license plate. 

 

Key takeaways: 

How GS1 US Data Hub's user-friendly SaaS platform empowers both small and large businesses to effortlessly manage and authenticate product identifiers and location data, ensuring accuracy and trust throughout the global supply chain.

The critical role of the GS1 Registry Platform (GRP) in combating Global Trade Item Number (GTIN) misuse and fostering global transparency by enabling real-time, cross-border verification of core product attributes for reliable traceability and inventory management.

The strategic advantages of GS1's identification system as Jess Urriola highlights its integral part in compliance with healthcare and food safety regulations, streamlining the entire traceability process from manufacturer to end-point via Global Location Numbers (GLNs).

 

Resources: 

GS1 US’s Data Hub

Data Hub Help Center

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with guest:

Jess Urriola on LinkedIn


MyData

International consortium set to advance AI & Multimodal data integration in personalised cardiovascular medicine.

The NextGen project kicked off at the beginning of 2024. MyData Global teamed up with clinical research centres, universities, professional associations, SMEs and non-profits in this EU Horizon Europe project to develop the next-generation tools for genome-centric multimodal data integration in personalised cardiovascular medicine.  The main trends in global demographics and health impact on the […]

Digital Identity NZ

Climbing the mahi mountain | April Newsletter

Kia ora,

April brings change in many forms – not just shorter days and falling leaves from our deciduous trees. For business it’s not only a new trading quarter and for some a new financial year, but also the arrival of policy exposure drafts from Government agencies that sometimes stack three, four or five deep. This is hard enough for people or organisations, but for consensus-driven community/industry groups it is quite daunting. I’ve always found it strange that while agencies recognise the value of group responses, additional time for the process is not afforded to them by default. However, it’s great that most agencies do so when approached. You know who you are, so thank you!

Over a third of DINZ's 100-strong membership (a warm welcome to ABCorp, Co-operative Bank and RBNZ, our newest members) participate in the DISTF Working Group, where a draft response to the targeted consultation on the final revision of the Trust Framework Rules is well underway. Additionally, responses to OPC's Biometrics code exposure draft are being initiated in the DINZ Biometrics Special Interest Group. The Regulatory & Policy subcommittee of DINZ's Exec Council ran out of time to provide an initial response to the Commerce Commission's Personal Banking Services Market Study regarding the roadblocks that might impede the June 2026 milestone recommendation for banks to participate in the Digital Identity framework as credential providers. The closing dates for all these initiatives are very similar. In a nutshell, that's our mahi mountain for the next four weeks. This last one reminds me just how often digital identity is raised these days. Look at the FintechNZ Deloitte Pulsecheck and HorizonPoll as further examples. This great post from DINZ member IDVerse indicates it's similar across the Tasman.

Earlier this month, Health NZ treated us to a great lunchtime webinar followed by the joint DINZ – Payments NZ members digital identity investigative sprint, which commenced with an orientation webinar last week. We had 80 on the call! This sprint serves as a prelude to a workshop next month to surface the digital identity related issues that people encounter in the payments industry. The goal is to develop best practice requirements to overcome these issues as part of PaymentsNZ’s Next Generation Payments programme.

On Tuesday 7 May, I’ll be at the NZ Government Data Summit moderating a panel with DINZ members MyMahi, MoE and DIA. DIA is up next at DINZ on Tuesday 14 May as it walks us through the Digital Identity Services Trust Framework and how it supports safe and effective digital identity services in Aotearoa. Register for it here.

Furthermore, I’m excited to announce that our sponsorable Podcast series is ready so please get in touch to drive forward Aotearoa’s digital identity ecosystem with your thought leadership. 

And finally, while there will be several smaller webinars and workshops in the coming months, it’s great to see DINZ’s annual flagship event – the Digital Trust Hui Taumata at Te Papa 13 August – taking shape with tickets on sale from today! 

Get in touch if you would like to sponsor and/or present.

Ngā mihi

Colin Wallis
Executive Director, Digital Identity NZ

Read full news here: Climbing the mahi mountain | April Newsletter

SUBSCRIBE FOR MORE

The post Climbing the mahi mountain | April Newsletter appeared first on Digital Identity New Zealand.

Tuesday, 23. April 2024

Ceramic Network

Upgrade your Ceramic node to v5.3+ before the 6th of May


As part of our ongoing commitment to reliability, scalability, and performance, the Ceramic core developers have spent a lot of the last year making significant architectural improvements to our Ceramic Anchor Service (CAS). To further advance the goal of a reliable and performant anchoring system, the CAS is moving away from IPFS. This change requires developers to upgrade their Ceramic nodes to v5.3+ by 6th of May, 2024 to avoid potential data loss.

About the change

The Ceramic team is getting closer to releasing Ceramic’s Recon protocol, intended to dramatically improve the performance, scalability, and reliability of cross-node data synchronization. Utilizing our new custom Recon protocol requires moving away from IPFS pubsub as the mechanism for synchronizing Streams in Ceramic. In preparation for this change, the Ceramic team has built a new direct HTTP-based mechanism for sharing Time Events from the CAS to Ceramic nodes. This means that newer Ceramic nodes no longer have any dependency on IPFS pubsub to learn about Time Events from the CAS. Older Ceramic nodes, however, may still be relying on pubsub to learn about Time Events. Since the correct delivery of Time Events is a requirement to avoid data loss, it is important that all Ceramic nodes on the network be upgraded to at least v5.3 by the 6th of May, 2024, to avoid potential data loss.

What does this mean to Ceramic developers?

Developers must upgrade their Ceramic nodes to v5.3+ by the 6th of May, 2024. After the 6th of May, the CAS will no longer publish Time Events to IPFS pubsub which means that if you don’t upgrade to v5.3+, your node won’t get notifications about anchor successes, which in turn may result in data loss.

How to get support?

If you need support regarding the upgrade or choosing the best path forward for your project, please create a post on the Ceramic developer forum with your project details, and the core Ceramic team members will reach out to you.


ResofWorld

2023 Impact Report

The global reach and impact of our journalism in 2023

Hyperledger Foundation

Just Introduced: Refreshed Hyperledger Project Logos

Last year, we introduced a new look for Hyperledger Foundation that fit with our increasingly mature market and ecosystem as well as our enterprise audience. Now, with the input of our developer communities, we have refreshed and, in some cases, reinvented the logos for our 13 projects. 



ResofWorld

TSMC’s debacle in the American desert

Missed deadlines and tension among Taiwanese and American coworkers are plaguing the chip giant’s Phoenix expansion.
Bruce thought he’d landed his dream job. The young American engineer had been eager for a stable, high-paying job in the semiconductor industry. Then, in late 2020, he received a...

What it takes to raise a $300 million VC fund for Africa

Tidjane Deme talks about convincing investors to back Africa-focused tech funds amid a global funding slump.
Tidjane Deme is a general partner at Partech, a global tech investment firm. He co-leads the firm’s Africa fund, which raised over $300 million in February. This interview has been...

Monday, 22. April 2024

FIDO Alliance

NIST cites phishing resistance of synced passkeys in Digital Identity Guidelines update


Andrew Shikiar, FIDO Alliance Executive Director & CEO

Adoption of passkeys has grown rapidly since the introduction of sync capabilities less than two years ago, with passkeys being offered by a large and growing proportion of the world’s most visited websites and services. This adoption has come in large part because passkeys offer a true password replacement, helping address the well-known security and user experience weaknesses of knowledge-based authentication like passwords and even other second-factor methods like SMS OTPs.

Market adoption of new technology naturally moves faster than the associated policy and regulatory guidance – which for user authentication still generally reflects the password-centric worldview from when such guidance was developed. This is why we are excited that NIST has taken a lead amongst government agencies and moved quickly to provide new supplemental guidance confirming that synced passkeys meet Authentication Assurance Level 2 (AAL2).

This new NIST guidance makes clear that passkeys – like other FIDO authenticators – can support both AAL2 and AAL3 requirements. Synced passkeys can be AAL2 and device-bound passkeys can be AAL3.

Crucially, the NIST supplement also states that synced passkeys deployed in a manner consistent with the guidelines are phishing resistant. This has obvious benefits in a world where 87% of hacking-related breaches are caused by weak or stolen passwords and where there has been a 967% rise in credential phishing since 2022.

Passkey adoption to be boosted by the ‘reassurance of assurance’

While the rate of passkey adoption to date has been nothing short of phenomenal, some organizations – particularly those in regulated industries – understandably want to see that key government bodies accept and recommend new technologies like passkeys before supporting them at scale.  

We have heard this from our partners and constituents across the globe about NIST in particular, whose digital identity guidelines are a global gold standard that are frequently cited by other countries. Today’s supplemental guidance from NIST stands to remove a critical barrier to passkey adoption, which now stands to be further accelerated.

However, there is still work to do. We are working closely with other agencies across the globe to educate them about passkeys and the importance of phishing-resistant authentication, and are encouraging them to update legacy policies, guidelines, and regulations to ultimately allow all organizations, wherever they are, to confidently provide more secure and more convenient authentication to their users and customers. 

Building NIST guidance into business best practices

Identity and authentication architects should contemplate NIST’s supplemental guidance as part of their broader digital identity strategy. For example, for every use case where password + OTP was used in the past, a synced passkey deployed in accordance with the new NIST guidance is not only sufficient to meet AAL2 requirements, but also more effective. In the vast majority of deployment scenarios, synced passkeys will provide a significant security and UX improvement over today’s authentication patterns – almost all of which are susceptible to phishing.

If organizations have specific business, regulatory, or other security requirements, they can choose whether to accept a synced passkey as the primary authentication method or as a second factor, pair it with a risk engine, or require a device-bound key. Today's guidance frees architects from thinking about authentication layers so they can instead focus on business requirements and related threat models. And today's primary threat model of phishing and social engineering can be directly addressed by the use of passkeys.


Elastos Foundation

How Elastos is Pioneering Charitable Donations with ScanGive: A SmartWeb Innovation!


We’re excited to share how Elastos technology is spearheading a new era of charitable giving with our latest collaboration on the ScanGive platform. This partnership reflects our commitment to leveraging SmartWeb (Web3) innovations to enhance everyday actions—like making donations—secure, efficient, and transparent. Join us as we delve into this groundbreaking initiative.

Transforming Charitable Donations with Elastos and ScanGive:

Making a charitable contribution should be straightforward, secure, and impactful. That’s the vision we realized with ScanGive. By integrating Elastos technology, the platform enables users to make donations effortlessly via a simple QR code scan, all underpinned by the robust security of the Elastos infrastructure.

Elastos-Enabled Solutions on ScanGive:

Elastos’ role in revolutionizing this platform is comprehensive, ensuring that each aspect of your donation is managed meticulously:

Secure Transactions with Elastos' Web3Essentials Wallet: The Web3Essentials Wallet is at the core of ScanGive, providing a secure environment for managing all your transactions, whether you are donating, receiving, or holding funds.

Efficient Identity Verification with Elastos eKYC Decentralized ID (DID): Leveraging our decentralized ID technology, ScanGive offers a quick verification process, allowing for immediate tax benefits without compromising personal privacy.

Universal Currency Support Through The Metaverse Bank: Thanks to our partnership with The Metaverse Bank, ScanGive supports donations in any currency, including cryptocurrencies. This collaboration ensures transparency and efficiency, regardless of currency type or donor location.

 

Tax Benefits and Global Accessibility:
Utilizing Elastos technologies, ScanGive simplifies the process for donors to claim tax benefits. In jurisdictions like the UK, where donations can be offset against tax, our platform automates these rebates, making it hassle-free for donors.

Our Ongoing Commitment and Future Plans:
ScanGive is currently being tested in various religious establishments throughout the UK, with plans to broaden these trials to more sectors. These initial implementations are essential for refining the platform and illustrating the tangible benefits of integrating SmartWeb (Web3) technology into charitable giving.

 

Visit ScanGive Today!

MOBI

MOBI and Gaia-X 4 moveID Web3 Interoperability Initiative


MOBI and Gaia-X 4 moveID Web3 Interoperability Initiative

Global Consortia Collaborate to Implement Trusted Self-Sovereign Identities and Verifiable Credentials for Vehicles and Batteries Enabling Circular Economy

Los Angeles, 22 April 2024 — MOBI and Gaia-X 4 moveID proudly announce a joint initiative aimed to advance cross-industry interoperability. The initiative focuses on the joint implementation of two pioneering MOBI standards: MOBI Vehicle Identity (VID) and MOBI Battery Birth Certificate (BBC). More specifically, the initiative centers around linking physical objects — e.g., vehicles and their parts such as batteries — to Web3 digital identities and credentials. The goal is to demonstrate technical interoperability using World Wide Web Consortium (W3C) open standards to establish a secure Self-Sovereign Data and Identities framework. The collaboration builds upon W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) to manage identities of vehicles and vehicle parts along with their credentials for transactions and claims, foundational for a circular economy.
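
To make the W3C building blocks concrete, the sketch below shows what a battery credential could look like in the generic Verifiable Credentials data model, written as a Python dict. This is purely illustrative: the DIDs, credential type, and attributes are hypothetical placeholders, not the actual MOBI VID or BBC schemas.

# Purely illustrative: a generic W3C Verifiable Credential for a battery.
# The DIDs, credential type, and attributes below are hypothetical and do
# NOT reflect the actual MOBI VID/BBC schemas.
battery_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "BatteryBirthCertificateCredential"],
    "issuer": "did:example:battery-manufacturer",        # hypothetical issuer DID
    "issuanceDate": "2024-04-22T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:battery-pack-123",            # hypothetical subject DID
        "installedInVehicle": "did:example:vehicle-456", # hypothetical vehicle DID
        "capacityKWh": 75,
    },
    # A real credential would also carry a cryptographic proof section
    # signed by the issuer; it is omitted here for brevity.
}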

Gaia-X 4 moveID is set to fundamentally reshape the landscape of mobility and vehicle services, fostering a unified digital ecosystem with Self-Sovereign Identities (SSI) for all traffic participants. By crafting the essential technical and economic foundations, moveID is leading the charge toward a decentralized, user-focused mobility service ecosystem, leveraging SSI and Distributed Ledger Technology (DLT) alongside open-source software and standards. This pioneering ecosystem is designed to support a wide range of applications, including peer-to-peer parking and charging infrastructures, ensuring seamless data availability for smart mobility services, and creating open markets for data providers, AI applications, and mobility infrastructure services.

MOBI, a global nonprofit Web3 consortium, unites public and private sector efforts to create cross-industry interoperability through the creation of standards and building of critical infrastructure for Self-Sovereign Data (SSD), SSI, and verifiable transactions. This infrastructure comprises two federated networks: Citopia, a decentralized marketplace leveraging VCs, and the Integrated Trust Network (ITN), a DIDs Registry. Citopia and the ITN work together to deliver regulatory-compliant and standardized communication protocols for near-real-time transactions. Implemented in tandem, MOBI VID and MOBI BBC standards form the groundwork for a globally interoperable, open, and connected ecosystem; critical to unlocking a vast array of new applications for a resilient and circular economy.

Within the circular economy value chain, countless public and private stakeholders operate with distinct processes for managing sensitive business and consumer data. In complex, connected networks where trust incurs significant frictional costs, current centralized solutions are unable to provide adequate interoperability and extensibility needed to enable secure data exchange and authentication between stakeholders. The initiative will demonstrate the effectiveness of a Web3-enabled approach rooted in global standards by equipping vehicles and vehicle parts — such as batteries — with trusted self-sovereign identities for verifiable claims and transactions.

The initiative’s future potential is promising, with plans to explore standardized know-your-business (KYB) and know-your-customer (KYC) processes for Phase II. While Phase I focuses on testing technical interoperability, Phase II will involve testing business interoperability, a more complex endeavor that necessitates the creation of shared standards for the onboarding of consumers, businesses, and regulators’ trusted identities and credentials.

“A recent study estimates that 73% of all internet traffic comprises bad bots and fraud farm traffic. Generative AI increases these risks dramatically. Today, approximately 20 billion IoT-connected devices lack trusted, verifiable identities and credentials, rendering them unable to securely engage in eCommerce,” said Tram Vo, MOBI CEO and Co-Founder. “With trusted and verifiable identities and claims, IoT devices — such as vehicles and batteries — can act as autonomous economic agents to securely share and authenticate data for every transaction, ensuring regulatory compliance and a higher degree of trust throughout the entire value chain.”

“A holistic, transparent system architecture for the exchange of data in road traffic which meets the new data sovereignty needs is not available today. There are individual companies that already offer services. But these services are tailored to specific applications, vehicles, or customer groups. With its open and EU compliant digital identity architecture, moveID helps to solve that challenge in the future markets,” explains Peter Busch, project manager at Gaia-X 4 moveID consortium lead Bosch.

About MOBI

MOBI is a global nonprofit Web3 consortium. We are creating standards for trusted Self-Sovereign Data and Identities (e.g. vehicles, people, businesses, things), verifiable credentials, and cross-industry interoperability. Our goal is to make the digital economy more efficient, equitable, decentralized, and circular while preserving data privacy for users and providers alike. For additional information about joining MOBI, please visit www.dlt.mobi.

About Gaia-X

Gaia-X 4 moveID is a project of the Gaia-X 4 Future Mobility project family which is supported by the German Federal Ministry for Economic Affairs and Climate Action. The project consortium consists of 19 companies and organizations such as Robert Bosch GmbH, Continental Automotive Technologies GmbH, and DENSO AUTOMOTIVE Deutschland GmbH as well as Web3 and SSI expert companies such as 51nodes GmbH, Datarella GmbH, and Peaq Technology GmbH.

Gaia-X 4 moveID recently unveiled its latest innovations at IAA Mobility 2023 in Munich, demonstrating the transformative potential of decentralized digital identities in mobility through its Park & Charge demonstrator. At the upcoming Hannover Fair 2024, moveID, alongside the Gaia-X 4 Future Mobility project family, will spotlight groundbreaking use cases and the foundational Base-X concepts that underpin all projects within the family.

Media Contact: Grace Pulliam, MOBI Communications Manager

Email: grace@dlt.mobi | Twitter: twitter.com/dltmobi | Linkedin: MOBI

The post MOBI and Gaia-X 4 moveID Web3 Interoperability Initiative first appeared on MOBI | The New Economy of Movement.


Digital ID for Canadians

Spotlight on Fintracker


1. What is the mission and vision of Fintracker?

Fintracker’s mission and vision is to reduce risk in the Canadian Real Estate industry by simplifying Anti-Money Laundering compliance for REALTORS® with easy-to-use tools that save them time and money.

2. Why is trustworthy digital identity critical for existing and emerging markets?

Canadian Real Estate has been highlighted as a target area for criminals and fraud attacks, requiring a strong response by all participants, including Canada's 150,000 REALTORS®. Many crimes associated with Canadian real estate have been tied to potential overseas proceeds of crime or corruption. Fintracker improves the integrity of identity verification and supports the completeness of compliance records to the regulatory standards needed to have robust controls to fight these threats.

3. How will digital identity transform the Canadian and global economy? How does your organization address challenges associated with this transformation?

Fintracker provides an easy, economical, digital first experience for all REALTORS® on its platform. Our services are part of their Real Estate Board membership and provide a smart and reliable method to reduce the risk of crime in the real estate sector by strengthening identification of property buyers; collecting key records to identify risk and facilitating potential reporting to regulators as needed. Fintracker’s mobile application puts their office in their hands for a true digital first experience to the leading industry standards.

4. What role does Canada have to play as a leader in this space?

Canada’s strong dedication to information security, privacy and fighting financial crime principles needs to be translated into executable standards and practices for all sectors and communities to ensure that these principles are supported in technology evolution and future networks. Fintracker is committed to playing its part in this effort for REALTORS® and the Canadian Real Estate sector.

5. Why did your organization join the DIACC?

DIACC’s members include the who’s who for delivering Canada’s secure and privacy protecting methods to identify customers and protect their information including technology providers, government agencies and leading private sector companies. Fintracker’s identity technology is already sourced from an active participant in the DIACC’s Directory of Identity Management and Proofing Products (Aligned with FINTRAC guidance and endorsed by many Law Societies). This membership is an affirmation of our commitment to be the trusted and leading provider of secure solutions for the REALTORS® community in Canada.

6. What else should we know about your organization?

Fintracker is the go-to leader in providing relevant connections and value for the REALTORS® community in Canada and is eager to engage in the right partnerships. We support over 50% of the Real Estate Boards in Canada, with over 90% of Realtors able to use our services as part of their board membership.


Identity At The Center - Podcast

Ready to deepen your understanding of IAM in the cloud? The


Ready to deepen your understanding of IAM in the cloud? The latest episode of The Identity at the Center Podcast features Kat Traxler, a seasoned security researcher from TrustOnCloud, who shares her invaluable insights on GCP, AWS, and the nuances of identity management in cloud platforms. You can hear our conversation at idacpodcast.com and in your podcast app.

#iam #podcast #idac


Origin Trail

Decentralized RAG 101 with OriginTrail DKG and Google Gemini


Retrieval Augmented Generation (RAG) has established itself as a key paradigm for builders in the AI space looking to feed LLMs with a specific context and datasets. The term RAG was coined by Patrick Lewis in the Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks paper, introducing it as a technique for enhancing the accuracy and reliability of GenAI models with facts fetched from external sources. It allows AI solutions to dynamically fetch relevant information before the generation process, enhancing the accuracy of responses by limiting the generation to re-working the retrieved inputs.

As a growing number of AI systems are driven to utilize RAG, their builders are creating and curating valuable knowledge bases for their RAG pipelines. This opens up a tremendous opportunity for connecting these individual knowledge bases, enabling the sharing of knowledge and value between AI systems in a decentralized way. The opportunity is analogous to those previously seized through advances in networking, such as the early computer networks Ethernet and the Internet, which demonstrated tremendous value generation through the network effects famously articulated in Metcalfe's law.

This vision, embodied in the concept of the Verifiable Internet for AI, has been articulated in the recent white paper. It is centered around the paradigm of Decentralized Retrieval Augmented Generation (dRAG) as a way to publish and retrieve structured knowledge from multiple sources for AI systems. dRAG focuses on the discoverability of knowledge across knowledge bases, cryptographic verifiability of data, and maintaining ownership of knowledge assets with user-defined access control.

In this blog post we will showcase how to implement a basic Decentralized Retrieval Augmented Generation system using Google Gemini and the OriginTrail Decentralized Knowledge Graph. This approach was recently demonstrated at the Google + OriginTrail meetup at Google's Amsterdam offices, with the full recording available below, including a special appearance from Dr Bob Metcalfe himself, who joined to offer his perspective.

Enter Decentralized Retrieval Augmented Generation (dRAG)

The dRAG framework advances the RAG approach by leveraging the Decentralized Knowledge Graph (DKG), organizing knowledge in a network of Knowledge Assets. Each Knowledge Asset contains graph data and/or vector embeddings, immutability proofs, a Decentralized Identifier (DID), and the ownership NFT. When connected in the permission-less DKG, the following capabilities are enabled:

Knowledge Graphs — structural knowledge in knowledge graphs allows a hybrid of neural and symbolic AI methodologies, enhancing the GenAI models with deterministic inputs.

Ownership — dRAG uses input from Knowledge Assets that have an owner that can manage access to the data contained in the Knowledge Asset.

Verifiability — every piece of knowledge on the DKG has cryptographic proofs published ensuring that no tampering has occurred since it was published.

Figures: dRAG framework; Application architecture

dRAG with Google Gemini and OriginTrail DKG

Today we will focus our dRAG example on art assets, using Google Gemini via the Google Vertex platform. We will query data from the OriginTrail Decentralized Knowledge Graph (DKG), which contains easily discoverable and verifiable Knowledge Assets. Each Knowledge Asset contains graph data, immutability proofs, a Decentralized Identifier (DID), and the NFT, which means that you can track the complete history of a Knowledge Asset on the blockchain. Once we retrieve relevant Knowledge Assets from the DKG, we will feed them to the Google Gemini LLM to generate a response.

Prerequisites

A GCP/Google Vertex AI account.
Access to an OriginTrail DKG node. Please visit the official docs to learn how to run one.
A Python project with a virtual environment set up.

Step 1 — Setting up a Python Project

In this step, assuming you have an empty Python project ready, as well as an empty Google Cloud project, you’ll install the necessary packages using pip and set up the credentials for Google Cloud.

Navigate to your Python project’s environment and run the following command to install the packages:

pip install dkg requests google-cloud-aiplatform python-dotenv

You’ll then need to authenticate with Google Cloud. Run the following commands in your shell, replacing the value with your project ID:

gcloud config set project "your-project-goes-here"
gcloud auth application-default login
gcloud init

You will be required to choose the Google Vertex project you want to use and authenticate that you do have the required permissions in it. The roles you need are:

AI Platform Admin
Vertex AI Administrator

You can now move on to setting up dkg.py, the Python SDK for connecting to the OriginTrail Decentralized Knowledge Graph.

Step 2 — Connecting to the DKG using dkg.py

In this step, you’ll set up the environment variables which will hold the necessary keys for connecting to the OriginTrail DKG using dkg.py. Then, you’ll connect to the DKG and print the version of the node you’re connected to.

Create a .env file and add the following lines:

JWT_TOKEN="your_jwt_token"
OT_NODE_HOSTNAME="your_ot_node_hostname"
PRIVATE_KEY="your_private_key"

The JWT_TOKEN is used to authenticate to your DKG node, the OT_NODE_HOSTNAME is the API endpoint for the node, and the PRIVATE_KEY represents the private key of a blockchain address that is equipped with TRAC tokens and appropriate gas tokens (NEURO for Neuroweb, xDAI for Gnosis, etc). For more information on how to obtain tokens, refer to the documentation.

Replace the values with your own, which you can find in the configuration file of your OT Node, as well as your wallet’s private key in order to publish knowledge assets. Keep in mind that this file should be kept private as it contains private keys. When you’re done, save and close the file.

Then, create a Python file and add the following code to connect to the DKG:

from dkg import DKG
from dkg.providers import BlockchainProvider, NodeHTTPProvider
from dotenv import load_dotenv
import os
import json

# Load the credentials from the .env file created earlier
dotenv_path = './.env' # Update if placed somewhere else
load_dotenv(dotenv_path)
jwt_token = os.getenv('JWT_TOKEN')
ot_node_hostname = os.getenv('OT_NODE_HOSTNAME')
private_key = os.getenv('PRIVATE_KEY')

# The node provider talks to your OT Node's API; the blockchain provider signs transactions
node_provider = NodeHTTPProvider(ot_node_hostname, jwt_token)
blockchain_provider = BlockchainProvider("mainnet", "otp:2043", private_key=private_key)

# Instantiate the DKG client and print the node info to verify the connection
dkg = DKG(node_provider, blockchain_provider)
print(dkg.node.info)

Here, you first import the required classes and packages. Then, you load the values from .env and instantiate a NodeHTTPProvider and BlockchainProvider with those values, which you pass into the DKG constructor, creating the dkg object for communicating with the graph.

If all credentials and values are correct, the output will show you the version that your OT Node is running on:

{'version': '6.2.3'}

If you see such a version response, that means you have successfully connected to the DKG!

Step 3 — Making a retrieval query with Gemini

For this dRAG example we will use the Google Gemini LLM to generate a SPARQL query for retrieving relevant knowledge from the DKG. SPARQL is a standardized query language for knowledge graphs and is very similar to SQL, and you can use it to query connected public data across all the nodes on the DKG. Just like SQL, it has a SELECT and a WHERE clause, so as long as you’re familiar with SQL you should be able to understand the structure of the queries pretty well.

The data that you’ll be querying is related to artworks, stored in the DKG as Knowledge Assets. Each Knowledge Asset contains information such as name, description, artform, and author.
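
Before bringing Gemini into the loop, it may help to see what a hand-written query over this data looks like. The sketch below uses the Schema.org properties mentioned above and an arbitrary LIMIT; executing queries against the DKG is covered in Step 4.

# A minimal hand-written SPARQL query over the artwork data described above.
# It lists artwork names, analogous to "SELECT name FROM artworks LIMIT 5" in SQL.
example_query = """
PREFIX schema: <http://schema.org/>

SELECT ?artwork ?name
WHERE {
    ?artwork a schema:VisualArtwork ;
             schema:name ?name .
}
LIMIT 5
"""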

First, you’ll need to instruct the Google Gemini LLM on what to do:

instruction_message = '''
I am working on a project involving artworks and their related data. I have a schema in JSON-LD format that outlines the structure and relationships of the data I am dealing with. Based on this schema, I need to construct a SPARQL query to retrieve specific information from a dataset that follows this schema.

The schema is focused on artworks and includes various properties such as the artist, description, artform and author among others. My goal with the SPARQL queries is to retrieve data from the graph about the artworks, based on the natural language question that the user posed.

Here's an example of an artwork in the JSON-LD format: {
"@context": "http://schema.org",
"@type": "VisualArtwork",
"@id": "https://origintrail.io/images/otworld/1fc7cb79f299ee4.jpg",
"name": "The Last Supper",
"description": "The Last Supper is an iconic Renaissance fresco by Leonardo Da Vinci.",
"artform": "Painting",
"author": {
"@type": "Person",
"name": "Leonardo da Vinci"
},
"image": "https://origintrail.io/images/otworld/1fc7cb79f299ee4.jpg",
"keywords": [
"The Last Supper",
"Leonardo da Vinci",
"Renaissance",
"fresco",
"religious art"
],
"publisher": {
"@type": "Person",
"name": "dkgbrka"
}
}

Here's an example of a query to find artworks from publisher "BranaRakic":

```sparql
PREFIX schema: <http://schema.org/>

SELECT ?artwork ?name ?ual

WHERE { ?artwork a schema:VisualArtwork ;
GRAPH ?g
{ ?artwork schema:publisher/schema:name "BranaRakic" ; schema:name ?name . }

?ual schema:assertion ?g
FILTER(CONTAINS(str(?ual), "2043")) }```

Pay attention to retrieving the UAL, this is a mandatory step of all your queries. After getting the artwork with '?artwork a schema:VisualArtwork ;' you should wrap the next conditions around GRAPH ?g { }, and later use the graph retrieved (g) to get the UAL like in the example above.

Make sure you ALWAYS retrieve the UAL no matter what the user asks for and filter whether it contains "2043".

Make sure you ONLY return the SPARQL query without any extra output.

If you understood the assignment, say 'Yes' and I will proceed with a natural language question which you should convert to a SPARQL query.'''

instruction_understood_message = "Yes."

The instruction_message prompt contains the instructions in natural language. Here we provide the model with the expected schema of an artwork object (in JSON-LD notation, based on Schema.org) and an example SPARQL query. We also instruct it to pay attention to the examples and to return nothing except the SPARQL query. Feel free to try out other example queries on your own and apply filtering on any property, including the identity of the owner of the Knowledge Asset, its publisher, etc.; a hand-written example is sketched below.
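
For instance, a query filtering on the artform property rather than the publisher, while still retrieving the UAL as the instructions require, could look like this (the value "Painting" is just an illustration):

# Illustrative only: filter artworks by the artform property instead of the publisher.
artform_query = """
PREFIX schema: <http://schema.org/>

SELECT ?artwork ?name ?ual
WHERE {
    ?artwork a schema:VisualArtwork .
    GRAPH ?g {
        ?artwork schema:artform "Painting" ;
                 schema:name ?name .
    }
    ?ual schema:assertion ?g .
    FILTER(CONTAINS(str(?ual), "2043"))
}
"""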

You can now define the chat history to let Gemini know that the instructions precede the actual prompts:

from vertexai.preview.generative_models import GenerativeModel, ChatSession, Content, Part, GenerationConfig

chat_history = [
    Content(parts=[Part.from_text(instruction_message)], role="user"),
    Content(parts=[Part.from_text(instruction_understood_message)], role="model"),
]

Then, instantiate the Gemini model with the chat history. Temperature is set to 0 to reduce the creativity of the LLM model to minimize hallucination:

def get_chat_response(chat: ChatSession, prompt: str) -> str:
    # Send the prompt to the chat session and return only the generated text
    response = chat.send_message(prompt, generation_config=GenerationConfig(temperature=0))
    print(response)

    return response.candidates[0].content.parts[0].text

def clean_sparql_query(input_string):
    # Strip the Markdown code fences (```sparql ... ``` or ``` ... ```) the model may add
    if input_string.startswith("```sparql") and input_string.endswith("```"):
        cleaned_query = input_string[9:-3].strip()
        return cleaned_query
    elif input_string.startswith("```") and input_string.endswith("```"):
        cleaned_query = input_string[3:-3].strip()
        return cleaned_query
    else:
        return input_string

gemini_pro_model = GenerativeModel("gemini-1.0-pro-001", generation_config=GenerationConfig(temperature=0))
chat = gemini_pro_model.start_chat(history=chat_history)

The clean_sparql_query() function will remove erroneous backticks that may be returned in the result.
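
As a quick sanity check, a fenced model output is reduced to the bare query:

# Demonstration of the helper on a fenced model output.
raw_output = "```sparql\nSELECT ?s WHERE { ?s ?p ?o }\n```"
print(clean_sparql_query(raw_output))
# Expected output: SELECT ?s WHERE { ?s ?p ?o }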

You can now generate SPARQL for searching the DKG using natural language prompts:

question = "Provide me with all the artworks published by Google Demo Amsterdam"
print(get_chat_response(chat, question))

query = clean_sparql_query(get_chat_response(chat, question))
print(query)

Now that you have a query, you can get results from the DKG. An example query that would be returned for the shown prompt looks like this:

PREFIX schema: <http://schema.org/>

SELECT ?artwork ?name ?ual
WHERE { ?artwork a schema:VisualArtwork ;
GRAPH ?g
{ ?artwork schema:publisher/schema:name "Google Demo Amsterdam" ; schema:name ?name . }

?ual schema:assertion ?g
FILTER(CONTAINS(str(?ual), "2043")) }

Step 4 — Retrieval from the DKG with the generated query

Querying the DKG is very easy with SPARQL. You only need to specify the query and the repository to search:

query_result = dkg.graph.query(query, "privateCurrent")
print(query_result)

For completeness we use the privateCurrent repository, as it ensures that the SPARQL query retrieves both the public and private data (if any is present on our node) from Knowledge assets in the DKG.

An example result for the above query, which is looking for artworks published in the DKG by “Google Demo Amsterdam” publisher, looks like this:

[{
'artwork': 'https://i.gyazo.com/a59c65f0a0dde03314d6ebcedec008cb.jpg',
'ual': 'did:dkg:otp:2043/0x5cac41237127f94c2d21dae0b14bfefa99880630/4606152',
'name': '"NeuroWeb Logo"'
}, {
'artwork': 'https://cryptologos.cc/logos/origintrail-trac-logo.png',
'ual': 'did:dkg:otp:2043/0x5cac41237127f94c2d21dae0b14bfefa99880630/4606876',
'name': '"OriginTrail Logo"'
}, {
'artwork': 'https://i.gyazo.com/72b6cd16e0d5b2e131b0311456dcdefc.png',
'ual': 'did:dkg:otp:2043/0x5cac41237127f94c2d21dae0b14bfefa99880630/4608743',
'name': '"Decentralized Hexagon"'
}]

Each of the entries above is a Knowledge Asset with its UAL (Uniform Asset Locator), which is its unique, dereferenceable address in the Decentralized Knowledge Graph. These Knowledge Assets can be crowdsourced from different individual knowledge bases — effectively, querying the DKG is equivalent to executing a search over the different data sources (e.g. RAG backends).
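
The UAL itself is structured: judging from the results above, it encodes the blockchain (here otp:2043), the smart contract address, and the token ID of the Knowledge Asset. The snippet below is an illustrative way to unpack it; it is not part of the dkg.py API, and the exact format may vary.

# Illustrative helper (not part of dkg.py): split a UAL into its components.
# Format assumed from the example results above: did:dkg:<blockchain>/<contract>/<tokenId>
def parse_ual(ual: str) -> dict:
    prefix, contract_address, token_id = ual.rsplit("/", 2)
    blockchain = prefix[len("did:dkg:"):]
    return {
        "blockchain": blockchain,      # e.g. "otp:2043"
        "contract": contract_address,  # the Knowledge Asset storage contract
        "token_id": token_id,          # the asset's token ID on that contract
    }

print(parse_ual("did:dkg:otp:2043/0x5cac41237127f94c2d21dae0b14bfefa99880630/4606152"))
# {'blockchain': 'otp:2043', 'contract': '0x5cac41237127f94c2d21dae0b14bfefa99880630', 'token_id': '4606152'}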

Step 5 — Augmented Generation with Gemini

We will now feed the extracted Knowledge Assets to Gemini to answer our questions, providing it with the data about artworks that you've queried from the DKG. First, preprocess the data so that Gemini can understand it more easily:

formatted_results = "\n".join([f"- Title: {artwork['name']}, UAL: {artwork['ual']}" for artwork in query_result])

Then, define the prompt which is asking the model to answer artwork-related questions based on the knowledge you passed in:

prompt = (
    f"I have retrieved the following information from the Decentralized Knowledge Graph based on the query '{query}':\n"
    f"{formatted_results}\n\n"
    "Imagine you're guiding a tour through a virtual gallery featuring some of the most iconic artworks linked to detailed records in the Decentralized Knowledge Graph.\n"
    "As you introduce these artworks to the audience, delve into the stories behind them. What inspired these pieces? How do they reflect the emotions and techniques of the artist?\n"
    f"Question: {question}\n"
    "Answer:"
)

Finally, run the prompt and get back the answer:

llm_response = gemini_pro_model.generate_content(prompt)
print(llm_response)

Gemini’s response will look similar to this:

candidates {
content {
role: "model"
parts {
text: "**Artwork 1:**\n\n* **Title:** NeuroWeb Logo\n* **UAL:** did:dkg:otp:2043/0x5cac41237127f94c2d21dae0b14bfefa99880630/4606152\n\nThis striking logo captures the essence of NeuroWeb, a cutting-edge platform that harnesses the power of artificial intelligence to revolutionize the way we interact with the digital world. The vibrant colors and intricate design evoke a sense of innovation and boundless possibilities.\n\n**Artwork 2:**\n\n* **Title:** OriginTrail Logo\n* **UAL:** did:dkg:otp:2043/0x5cac41237127f94c2d21dae0b14bfefa99880630/4606876\n\nOriginTrail\'s logo is a testament to the company\'s mission of bringing transparency and traceability to global supply chains. The interlocking circles symbolize the interconnectedness of the world, while the vibrant green hue represents growth and sustainability.\n\n**Artwork 3:**\n\n* **Title:** Decentralized Hexagon\n* **UAL:** did:dkg:otp:2043/0x5cac41237127f94c2d21dae0b14bfefa99880630/4608743\n\nThis abstract artwork embodies the spirit of decentralization, a fundamental principle of the blockchain revolution. The hexagonal shape represents the interconnectedness of nodes in a decentralized network, while the vibrant colors evoke the diversity and resilience of the community."
}
}
finish_reason: STOP
safety_ratings {
category: HARM_CATEGORY_HATE_SPEECH
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_DANGEROUS_CONTENT
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_HARASSMENT
probability: NEGLIGIBLE
}
safety_ratings {
category: HARM_CATEGORY_SEXUALLY_EXPLICIT
probability: NEGLIGIBLE
}
}
usage_metadata {
prompt_token_count: 384
candidates_token_count: 364
total_token_count: 748
}

The text section contains the actual answer, while the usage_metadata part reveals how many tokens were used for generating the answer.
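
To work with the answer programmatically, you can pull out the generated text the same way the chat helper above does. The token counts live in the usage_metadata block shown in the printed response; the exact attribute names for accessing them may vary by SDK version.

# Extract just the generated answer text from the response object,
# using the same path as get_chat_response() above.
answer_text = llm_response.candidates[0].content.parts[0].text
print(answer_text)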

Using this dRAG code snippet, you could build a full-stack chatbot application which relies on trusted data verified on the DKG. Below is an example of such an application UI, similar to the one found on OriginTrail World.

In the example above, each answer corresponds to a specific source art Knowledge Asset published to the OriginTrail DKG. And since the DKG is a constantly growing graph of signed Knowledge Assets, you can leverage all of this knowledge in your dRAG applications.
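
As a rough sketch of what such an application would call for each user question, the helper below simply reuses the chat, dkg, and gemini_pro_model objects defined in the steps above; it is a minimal illustration, not a production design.

# Minimal end-to-end dRAG helper reusing the objects defined in the steps above.
def answer_with_drag(question: str) -> str:
    # 1. Let Gemini translate the natural-language question into SPARQL
    sparql_query = clean_sparql_query(get_chat_response(chat, question))
    # 2. Retrieve matching Knowledge Assets from the DKG
    results = dkg.graph.query(sparql_query, "privateCurrent")
    formatted = "\n".join(
        f"- Title: {artwork['name']}, UAL: {artwork['ual']}" for artwork in results
    )
    # 3. Feed the retrieved knowledge back to Gemini for the final answer
    generation_prompt = (
        "I have retrieved the following information from the Decentralized Knowledge Graph:\n"
        f"{formatted}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
    response = gemini_pro_model.generate_content(generation_prompt)
    return response.candidates[0].content.parts[0].text

# Example usage:
# print(answer_with_drag("Provide me with all the artworks published by Google Demo Amsterdam"))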

Conclusion

We’ve showcased a basic dRAG implementation today — you’ve created Knowledge Assets on the OriginTrail DKG and queried it by converting Natural Language queries into SPARQL assisted by Google Gemini. Find the above code here, and let us know your comments in our Discord channel.

Decentralized RAG 101 with OriginTrail DKG and Google Gemini was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


ResofWorld

African tech companies are ditching Google for a small Indian competitor

Zoho has positioned itself as a cheaper alternative to Google and Microsoft, attracting the attention of African startup founders.
When Nigerian edtech startup, Flexisaf, decided to cut costs earlier this year, it realized it needed to reduce its spending on technology. One of the company’s biggest costs was the...

Friday, 19. April 2024

GS1

GS1 presents at the ZERO Construct event - Don't miss out!

19 April 2024. Register now for the ZERO Construct virtual event.

ZERO Con Virtual is the premier global event hosted by ZERO Construction. As pioneers in sustainability within the construction industry, we’re thrilled to invite you to a transformative 24-hour experience designed to drive change, foster collaboration, and inspire action.

ZERO is building a global community that raises awareness, shares knowledge and empowers its members to meet our vision of a zero-carbon construction industry. Join us at ZERO Con Virtual, where we’re redefining the future of construction through innovation, collaboration, and sustainability. Over 24 hours, connect with industry leaders, experts, and like-minded professionals from across the globe. Gain exclusive insights into groundbreaking projects, emerging technologies, and best practices for reducing carbon emissions and promoting sustainable development.

Key details to note:

The event is open and free for all participants
ZERO Construct is a virtual, 24-hour global event
GS1 is scheduled to present at 12:00 (CET) during the EMEA session
Review the agenda HERE

Register today and join live! Share with your industry stakeholders!

Register now!


ResofWorld

Indonesia taps influencers to convince people to move to its new, under-construction capital

Social media stars are downplaying fears of deforestation and boredom in Nusantara.
Four years after Indonesian President Joko Widodo announced that he would move the nation’s capital from the main island of Java to Borneo, he led a tour of dozens of...

MyData

Calling on the European Commission to Enforce Data Portability

On April 19, MyData Global has partnered with the Coalition for Online Data Empowerment (CODE) and the Ethical Commerce Alliance (ECA) to send an open letter to the European Commission’s Executive Vice-President for A Europe Fit for the Digital Age and Competition, calling for action to enforce the data portability requirements of the EU’s Digital […]

Velocity Network

India Blockchain Alliance to champion the deployment of Velocity Network in India to accelerate local job market into the future.

India Blockchain Alliance (IBA) is a not-for-profit organization that promotes evidence-based adoption of Blockchain and Distributed Ledger Technologies (DLT) across the public and private sectors in India. The post India Blockchain Alliance to champion the deployment of Velocity Network in India to accelerate local job market into the future. appeared first on Velocity.

Thursday, 18. April 2024

ResofWorld

The regional flavors of labor-on-demand

A new report digs into the social dynamics of gig work.
Gig work has become universal, used for everything from remote copywriting in South Africa to cleaning homes in Vietnam. It’s so widespread that it’s easy to forget that, for many...

Bangladesh built a tech park for 100,000 workers. Now it’s a ghost town

The facility was partly funded by the World Bank and touted to become the country’s “cyber capital.”
When Bangladesh inaugurated its first technology business park — a sprawling campus for tech companies to set up offices and factories — in 2015, local computer manufacturer DataSoft swiftly seized...

Wednesday, 17. April 2024

Oasis Open Projects

Invitation to comment on Energy Interoperation Common Transactive Services (CTS) v1.0 – ends June 15

CTS permits energy consumers and producers to interact through energy markets. The post Invitation to comment on Energy Interoperation Common Transactive Services (CTS) v1.0 – ends June 15 appeared first on OASIS Open.

OASIS and the OASIS Energy Interoperation TC are pleased to announce that Energy Interoperation Common Transactive Services (CTS) v1.0 is now available for public review and comment. This is the third public review of this draft specification.

Common Transactive Services (CTS) permits energy consumers and producers to interact through energy markets by simplifying actor interaction with any market. CTS is a streamlined and simplified profile of the OASIS Energy Interoperation (EI) specification, which describes an information and communication model to coordinate the exchange of energy between any two Parties that consume or supply energy, such as energy suppliers and customers, markets and service providers.

The documents and related files are available here:

Energy Interoperation Common Transactive Services (CTS) Version 1.0
Committee Specification Draft 03
28 March 2024

PDF (Authoritative):
https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd03/ei-cts-v1.0-csd03.pdf
Editable source:
https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd03/ei-cts-v1.0-csd03.docx
HTML:
https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd03/ei-cts-v1.0-csd03.html
PDF marked with changes since previous publication:
https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd03/ei-cts-v1.0-csd03-DIFF.pdf
Comment resolution log for previous public review:
https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd02/ei-cts-v1.0-csd02-comment-resolution-log.txt

For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:
https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd03/ei-cts-v1.0-csd03.zip

A public review metadata record documenting this and any previous public reviews is available at:
https://docs.oasis-open.org/energyinterop/ei-cts/v1.0/csd03/ei-cts-v1.0-csd03-public-review-metadata.html

How to Provide Feedback

OASIS and the Energy Interoperation TC value your feedback. We solicit input from developers, users and others, whether OASIS members or not, for the sake of improving the interoperability and quality of our technical work.

The public review starts 17 April 2024 at 00:00 UTC and ends 15 June 2024 at 23:59 UTC.

The TC requests that comments cite the line numbers from the PDF-formatted version for clarity.

Any individual may submit comments to the TC by sending email to Technical-Committee-Comments@oasis-open.org. Please use a Subject line like “Comment on Energy Interoperation Common Transactive Services”.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review, we call your attention to the OASIS IPR Policy [1] applicable especially [2] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about the specification and the Energy Interoperation TC can be found at the TC’s public home page:
https://www.oasis-open.org/committees/energyinterop/

========== Additional references:

[1] https://www.oasis-open.org/policies-guidelines/ipr/

[2] https://www.oasis-open.org/committees/energyinterop/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr/#RF-on-Limited-Mode
RF on Limited Terms Mode

The post Invitation to comment on Energy Interoperation Common Transactive Services (CTS) v1.0 – ends June 15 appeared first on OASIS Open.


MOBI

NIADA

The National Independent Automobile Dealers Association (NIADA) has advanced independent automobile dealers since 1946 through advocacy, education and promotion. NIADA advocates for used auto dealers by addressing the challenging issues that disrupt the industry’s ability to create jobs, build thriving dealerships and maximize profitability. www.niada.com


The post NIADA first appeared on MOBI | The New Economy of Movement.


NAF Association

The National Automotive Finance (NAF) Association is the only forum for the exclusive benefit of the non-prime auto finance industry, addressing the challenges of sales finance companies, dealers, and third-party service providers. nafassociation.com


The post NAF Association first appeared on MOBI | The New Economy of Movement.


Identity At The Center - Podcast

Join us for a new Sponsor Spotlight episode of The Identity


Join us for a new Sponsor Spotlight episode of The Identity at the Center Podcast. Sandy Bird, co-founder and CTO of Sonrai Security, returns to introduce us to permissions on demand by way of Sonrai’s new Cloud Permissions Firewall. Learn more about it at https://sonrai.co/identity-at-the-center and listen to the episode now at idacpodcast.com or on your favorite podcast app.

We also tried something new for this episode... video! You can watch this episode on our YouTube channel at https://www.youtube.com/watch?v=oPlUwY4jqKg

#iam #podcast #idac


ResofWorld

How RRR’s success brought a wave of Telugu-language movies to Netflix

Netflix's U.S. catalog has more content in Telugu than in German, Russian, or any dialect of Chinese.
Since its release in March 2022, the Telugu-language film RRR has become one of the most celebrated Indian films in recent memory. The action epic garnered international acclaim and even...

Next Level Supply Chain Podcast with GS1

How 2D Barcodes Are Changing the Retail Landscape with Chuck Lasley


Chuck Lasley, IT Director at Dillard’s, explains the pivotal role of 2D barcodes in retail innovation, illustrating Dillard's strategy of incorporating these versatile codes into their products, which range from apparel to accessories. Amidst the growing demand for intricate product details, Chuck emphasizes the imperative for sales associates to be adept in product knowledge facilitated by 2D barcodes. As Chuck explains, 2D barcodes can lead to improved inventory management, better customer service, and enhanced consumer storytelling possibilities.

The conversation also explores AI's potential in customer service, the impact smartphones have had on computing power, and the potential of automated vehicles in altering supply chain dynamics. Chuck applauds the implementation of evolving technologies like RFID, which are crucial in the industry-wide 'Sunrise 2027' initiative. Sunrise 2027 aims for widespread adoption of 2D barcode scanning by 2027, with Dillard's ambitiously targeting an earlier date. This episode covers automation, innovation, and the pursuit of a unique identity within the global supply chain.

 

Key takeaways: 

Technology in customer service is advancing with tools integrating RFID and 2D barcode technologies in supply chain operations to improve accuracy and efficiency.

The retail industry recognizes the importance and advantages of transitioning from 1D to 2D barcodes and RFID technology for improved inventory management, customer service, and access to detailed product information.  

Technological advancements create enriched consumer experiences through unique transaction identifiers and product storytelling.

 

Resources: 

Learn More About 2D Barcodes

Resources for the Transition from 1D to 2D Barcodes 

Behind the Barcode: Mastering 2D Barcodes with GS1 US

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with guest:

Chuck Lasley on LinkedIn

Check out Dillard’s

Tuesday, 16. April 2024

Origin Trail

V8 roadmap update: Scalable knowledge engine for convergence of crypto, internet, and AI

The upcoming Decentralized Knowledge Graph (DKG) V8 update represents a significant advancement in Decentralized AI, building on the achievements of previous innovations brought by V6. The DKG V6 materialized knowledge as a new asset class, with its core AI-ready Knowledge Assets setting the stage for advanced AI applications in the domains of real-world assets (RWAs), decentralized science (DeSci), industry 4.0, and more.

Moving forward, DKG V8 introduces autonomous DKG growth, support for Initial Paranet Offerings (IPOs), and also significantly increases scalability. With this, the Decentralized Retrieval Augmented Generation (dRAG) becomes a foundational framework instilled in the DKG V8, significantly advancing a spectrum of large language model (LLM) applications.

DKG V8 is tailored to drive the next generation of AI through multi-modal content, which is crucial for a diversified and robust AI ecosystem. The integration of dRAG and other decentralized AI functionalities allows for a more verifiable and secure application of AI technologies, addressing challenges such as misinformation, data bias, and model collapse.
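As a conceptual sketch of the dRAG pattern described above (all helper names here are hypothetical, not part of any OriginTrail SDK), retrieval from a knowledge graph grounds the model's answer instead of letting it answer from its parameters alone.

```python
# Conceptual sketch of the dRAG pattern, with hypothetical helpers
# (`fetch_knowledge_assets`, `ask_llm`): retrieve verifiable statements from a
# knowledge graph, then ground the language model's answer in those statements.

from typing import List


def fetch_knowledge_assets(query: str) -> List[str]:
    """Placeholder for a DKG lookup. A real integration would query the graph
    (e.g. via SPARQL) and return statements plus their provenance proofs."""
    return [
        "ex:Batch42 ex:producedBy ex:AcmeWinery .",
        "ex:Batch42 ex:certifiedOrganic true .",
    ]


def ask_llm(prompt: str) -> str:
    """Placeholder for any LLM call (OpenAI, Vertex AI, a local model, ...)."""
    return f"[model answer grounded in: {prompt!r}]"


def drag_answer(question: str) -> str:
    facts = fetch_knowledge_assets(question)
    prompt = (
        "Answer using only the verifiable statements below.\n"
        + "\n".join(f"- {f}" for f in facts)
        + f"\n\nQuestion: {question}"
    )
    return ask_llm(prompt)


print(drag_answer("Is batch 42 certified organic?"))
```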

The present roadmap update focuses on DKG V8 catalysts designed to bootstrap and accelerate these advancements, including enhanced knowledge mining processes, integration across multiple blockchain ecosystems, and scalability improvements aimed at supporting an expansive growth of knowledge assets. These initiatives ensure that DKG V8 not only extends its foundational network effects but also reinforces its position as a cornerstone of future AI developments.

The entire roadmap can be found here.

DKG V8 — Decentralized Retrieval Augmented Generation at scale

“…it (LLM) unfortunately hallucinates most when you least want it to hallucinate. When you’re asking the important and difficult questions that’s where it tends to be confidently wrong. So we’re really trying hard to say how do we be as grounded as possible so you can count on the results?” Elon Musk on the Lex Fridman podcast

The truth, however, is an elusive concept, especially when a single organization or product attempts to capture it. A better approach is through connectivity and transparency, achieved by leveraging multiple open-source technologies. Turing Award winner Dr. Bob Metcalfe explained this idea, saying,

“… through connectivity, decentralized knowledge graphs, blockchains and AI are converging — and it’s an important convergence, because it is going to help us with one of the biggest problems we have nowadays, which is the truth.”

Inspired by Turing Award winner Dr. Bob Metcalfe and his pioneering work, including “Metcalfe’s Law” and the co-invention of Ethernet, the OriginTrail Metcalfe phase aims to leverage network effects by building a web of verifiable Knowledge Assets for decentralized AI.

The Genesis period of the Metcalfe phase bootstraps the growth of the AI-native V8 Decentralized Knowledge Graph (DKG V8), driving the verifiability of AI via Decentralized Retrieval Augmented Generation (dRAG). Supported by a unique knowledge mining system through the NeuroWeb blockchain, the Genesis period is followed by a “Convergence” period, which further leverages network effects via autonomous knowledge inferencing. The DKG V8 aims to offer a user-centric, trusted knowledge foundation with decentralized AI functionalities, enabling individuals and organizations to participate in a knowledge economy based on neutrality, inclusiveness, and usability — the core principles of the OriginTrail ecosystem.

Genesis — The V8 Foundation (Q4 2023–2025)

“When you connect things together, the value rises really fast because of all the possible connections that can be made, and the friction that’s reduced, and the collaboration that’s enhanced. So it’s good to bet on connectivity.” — Dr Bob Metcalfe

The Genesis period leverages connectivity to achieve network effects across the multi-chain OriginTrail Decentralized Knowledge Graph to reach a growth target of 1B Knowledge Assets. With the introduction of the community-driven NeuroWeb blockchain supporting DKG growth, Genesis bootstraps the OriginTrail AI-Native version 8 for Decentralized Retrieval Augmented Generation (dRAG), introduced to drive a multi-modal ecosystem of AI solutions. The V8 Foundation impact stages were initially described here and are now being expanded.

Genesis period targets:

1B knowledge assets available on the DKG

Minimum 40% of TRAC circulating supply activated for utility

TRAC locked for network security: 100MM+ TRAC

Average security collateral per node: 300k+ TRAC

Impact base: Trantor (established in Q1 2024)

One of the prominent features of Trantor was the Library of Trantor, in which librarians indexed the entirety of human knowledge by walking up to a different computer terminal every day and resuming where the previous librarian left off.

Catalyst 1: Knowledge Mining

Incentivized growth of high-quality knowledge in the DKG with Initial Paranet Offerings and Autonomous Knowledge Mining.

Genesis Knowledge Mining RFC
Genesis Knowledge Asset mining
Beta Mining Program

Catalyst 2: Delegated staking

Expanding inclusivity of the DKG infrastructure by enabling TRAC stake delegation across all integrated chains.

Delegated staking dashboard
Documentation
Delegated Staking RFC
DKG node TRAC token delegation release (Gnosis integration)

Whitepaper 3.0

Verifiable Internet for Artificial Intelligence: The Convergence of Crypto, Internet, and AI.

Link

This whitepaper presents a vision for the future of Artificial Intelligence through the concept of a Verifiable Internet for AI, leveraging synergies of crypto, internet, and AI technologies. It introduces the Decentralized Knowledge Graph (DKG) and Decentralized Retrieval Augmented Generation (dRAG) approach to ensure the provenance, integrity, and verifiability of information utilized by AI. It aims to address the challenges posed by misinformation, data ownership, and bias inherent in AI, by synergizing neural and symbolic AI approaches with Web3 technologies.

Impact base: Terminus (established in Q2 2024)

The founding population of Terminus consisted of 100,000 especially healthy scientists, whose ostensible purpose was to publish an Encyclopedia Galactica in order to preserve science and technology. The lack of natural resources forced Terminians to develop extremely high-efficiency technology, which their knowledge, as inheritors of the Imperial Library, allowed them to do.

Catalyst 1: Multichain growth

Bringing the DKG to any EVM-compatible ecosystem with significant demand (more information in OT-RFC-17).

◻️ NeuroWeb delegated staking release
◻️ Additional blockchain integrations (based on OT-RFC-17)

Catalyst 2: 100x scalability

Increasing the capacity for publishing Knowledge Assets by implementing random sampling and other scalability improvements.

◻️ NeuroWeb scaling: Asynchronous backing
◻️ DKG V8 random sampling update

Catalyst 3: Paranets and Initial Paranet Offerings (IPOs)

Autonomously operated collections of Knowledge Assets residing on the DKG and owned by their communities.

◻️ Initial Paranet Offerings (IPOs) launch
◻️ First IPO launched — the ID Theory decentralized Science (DeSci)
◻️ Cross-chain knowledge mining
◻️ Decentralized Identities on the DKG — name service integration

Catalyst 4: ChatDKG.ai

Interact with the DKG and its paranets using natural language and the power of multiple AI models and agents. Build your own Decentralized Retrieval Augmented Generation (dRAG) product seamlessly.

◻️ Multi modal LLM ChatDKG
OriginTrail World Launch Trusted AI platform
Google Vertex AI support
OpenAI support
NVIDIA Platform support
Chainlink support
◻️ xAI (Grok) support
V1 of unified framework (Whitepaper 3.0)
AI-based knowledge publishing
◻️ AI agent integrations
✅ Additional ChatDKG grant waves

Impact base: Gaia (established in H2 2024)

The human beings on Gaia, under robotic guidance, not only evolved their ability to form an ongoing telepathic group consciousness but also extended this consciousness to the fauna and flora of the planet itself, even including inanimate matter. As a result, the entire planet became a super-organism.

DKG V8

Scalable and robust foundation for enabling the next stage of Artificial Intelligence adoption with Decentralized Retrieval Augmented Generation (dRAG), combining symbolic and neural decentralized AI.

◻️ AI-native Knowledge Assets: native vector support (e.g. knowledge graph embeddings for native Graph ML)
◻️ AI-native search based on DKG V8 decentralized vector index
◻️ Knowledge Contracts

Catalyst 1: Autonomous knowledge mining

Mine new knowledge for paranets autonomously by using the power of symbolic AI (the DKG) and neural networks.

◻️ AI-agent driven knowledge mining
◻️ Knowledge mining library integrations for popular data science languages (e.g. Python, R etc)

Catalyst 2: DePIN for private knowledge

Keep your knowledge private, on your devices, while still being able to use it in bleeding-edge AI solutions.

◻️ Private Knowledge Assets repository (Knowledge Wallet)
◻️ Private data monetization with Knowledge Assets (Knowledge Marketplace)
◻️ DKG Decentralized File Storage integration libraries (e.g. IPFS, Filecoin)

V8 roadmap update: Scalable knowledge engine for convergence of crypto, internet, and AI was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


Blockchain Commons

2024 Q1 Blockchain Commons Report

In the first quarter of 2024, Blockchain Commons continued its work on specifications, updated some of its references, and did new research on constrained devices, provenance, and identity.

Specifications

dCBOR
Hashed Data Elision
FROST

Specification Docs

Multipart UR Implementation Guide
Request & Response Implementation Guide
Updated Research List

Reference Releases

Gordian SeedTool 1.6
Gordian Server 1.1
Rust Crates Updates

Constrained Devices Research

JavaCards
no_std in Rust

Provenance Research

C2PA
Source Code Provenance

Identity Research

eIDAS Dangers
Identity Dangers

The Future

Specifications

One of Blockchain Commons' biggest priorities is producing interoperable specifications that can be used by principals in the digital asset & identity field to create apps and hardware devices that support independence, privacy, resilience, and openness for users. In Q1, we worked to advance some of our specifications into true standards and also interacted with standards being developed by the rest of the field.

dCBOR. Our Internet-Draft for Decentralized CBOR (dCBOR) went through drafts 6, 7, and 8 this quarter. CBOR expert Carsten Bormann joined us as a co-author and we continued to make tweaks based on expertise from the CBOR community, most recently revising how we defined leaves (“enclosed CBOR”). dCBOR is crucial for deterministic data formats such as Gordian Envelope because it ensures that data is always encoded in the same way, no matter when or where the encoding is done. We have high hopes that dCBOR will be an IETF standard soon.
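As a rough illustration of why deterministic encoding matters, the sketch below uses Python's cbor2 canonical mode as an approximation; dCBOR itself layers additional rules on top of CBOR's deterministic-encoding requirements, so this is not a dCBOR implementation.

```python
# Rough illustration of why deterministic encoding matters for hashing.
# Python's `cbor2` canonical mode (sorted map keys, shortest-form integers)
# approximates the idea; the dCBOR Internet-Draft adds further constraints,
# so treat this only as a sketch of the concept.

import hashlib
import cbor2

a = {"name": "Alice", "age": 30}
b = {"age": 30, "name": "Alice"}   # same data, different insertion order

enc_a = cbor2.dumps(a, canonical=True)
enc_b = cbor2.dumps(b, canonical=True)

# Deterministic encoding => identical bytes => identical digests, which is
# what hash-based structures like Gordian Envelope rely on.
assert enc_a == enc_b
print(hashlib.sha256(enc_a).hexdigest())
```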

Hashed Data Elision. We previously authored an Internet-Draft for Gordian Envelope, which we’ve continued to update in connection with our dCBOR updates. We’ve been getting less traction here, so we supplemented it this quarter with a problem statement on Deterministic Hashed Data Elision Internet-Draft. We also presented on hashed data elision at IETF 119 Dispatch. Our core premise was that privacy and human-rights needs are not well supported in IETF standards. We believe that hashed data elision (including Gordian Envelope) should be used as an easy method to address those needs. Unfortunately, the IETF hasn’t been strong on privacy concerns. Previous RFCs on Privacy and Human Rights Considerations are mere recommendations with no weight. The bottom line seems to be that unless an existing protocol expresses a desire for privacy standards, there’s no place for hashed data elision in the IETF, though the IRTF, which focuses on “Research” instead of “Engineering”, might be a home for it.
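To make the idea concrete, here is a toy sketch of hashed data elision (not the actual Gordian Envelope format): each assertion is committed to by a digest, so a field can later be withheld while the document digest a verifier already trusts still verifies.

```python
# Toy sketch of hashed data elision, not the Gordian Envelope wire format:
# commit to each assertion with a digest, compute the document digest over
# those commitments, then elide an assertion (reveal only its digest) without
# invalidating the document digest.

import hashlib
import json


def digest(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


assertions = {"name": "Alice", "ssn": "123-45-6789", "member": True}

# Commit to each assertion, then to the set of commitments.
commitments = {k: digest({k: v}) for k, v in assertions.items()}
doc_digest = digest(sorted(commitments.values()))

# Elide the sensitive field: reveal only its digest.
revealed = {k: v for k, v in assertions.items() if k != "ssn"}
elided = {"revealed": revealed, "elided": {"ssn": commitments["ssn"]}}

# Verifier recomputes commitments for revealed fields, reuses the digest for
# the elided one, and checks the same document digest.
check = {k: digest({k: v}) for k, v in revealed.items()}
check["ssn"] = elided["elided"]["ssn"]
assert digest(sorted(check.values())) == doc_digest
print("document digest still verifies with the SSN elided")
```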



FROST. FROST, a threshold scheme for Schnorr signatures, originated with a paper in 2020. We’ve been looking forward to its deployment in wallets because of its improved resilience and privacy, plus other advantages such as being able to change thresholds offline. See our FROST page and Layperson’s introduction to Schnorr for some foundational info. In the last six months, we’ve been doing our share to help FROST become a reality. In Q4, we held an implementer’s round table to allow people working on FROST to talk to each other. This quarter, one of those implementers, Jesse Posner, gave a presentation at our most recent Gordian Developers meeting to help to introduce developers to the powers of Schnorr and FROST. Dare we say: winter is coming? At least, FROST is.
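To illustrate just the t-of-n threshold idea, here is a textbook Shamir secret sharing toy; FROST itself produces threshold Schnorr signatures and never reconstructs the private key in one place, so this sketch only shows why any t of n shares are sufficient.

```python
# Toy illustration of the t-of-n threshold idea behind schemes like FROST,
# using textbook Shamir secret sharing over a prime field. Real FROST signs
# without ever assembling the key; this only demonstrates the threshold math.

import random

P = 2**127 - 1  # a Mersenne prime, large enough for the demo


def make_shares(secret: int, t: int, n: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret


key = random.randrange(P)
shares = make_shares(key, t=2, n=3)
assert reconstruct(shares[:2]) == key            # any 2 of 3 suffice
assert reconstruct([shares[0], shares[2]]) == key
print("2-of-3 threshold reconstruction works")
```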

Be sure to also check out our January Gordian Developers Meeting with its focus on the “Multi-Part Implementation Guide for URs” and our February Gordian Developers Meeting with its Gordian SeedTool 1.6 demo, and be sure to sign up for the Gordian Developer announcements list or Signal channel so that you can hear about the demos or talks at future meetings.

January
February
FROST

Specification Docs

We want to make sure that our specifications are easily adaptable, especially to developers who might want to implement them. As a result, in Q1, we added two new implementation guides and revised how we flag the status of our specifications.

Multipart UR Implementation Guide. Multipart Uniform Resources (MURs) are Blockchain Commons’ biggest success because they allow for the interoperable and efficient creation of Animated QRs. They’ve been adopted by over a dozen wallets, mainly to pass PSBTs, but they can also pass other large data sets over an airgap: we’ve even tested megabytes! (video link). The pioneering MUR developers based their implementations on our reference code. This quarter we supplemented that with a MUR Implementation Guide that still focuses on our code, but offers explanations of how MURs work and precise instructions on how to make them work for you. Also see our January Gordian Developers Meeting for a walk-through of the Implementation Guide.
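As a loose illustration of the multi-part framing, the sketch below splits a payload into UR-style parts; the real MUR encoding uses bytewords and fountain codes so a scanner can recover the payload from any sufficient subset of frames, so treat this purely as a sketch of the seq/seqLen idea.

```python
# Naive sketch of splitting a payload into multi-part UR-style strings so it
# can be shown as an animated QR. Real MURs encode fragments with bytewords
# and add fountain-coded parts; this only shows the part-index framing.

def to_parts(payload: bytes, fragment_size: int = 100, ur_type: str = "bytes"):
    fragments = [payload[i:i + fragment_size]
                 for i in range(0, len(payload), fragment_size)]
    n = len(fragments)
    return [f"ur:{ur_type}/{i + 1}-{n}/{frag.hex()}"
            for i, frag in enumerate(fragments)]


def from_parts(parts):
    # Reassemble once every sequence number has been seen (in any order).
    frags, total = {}, 0
    for p in parts:
        _, seq, body = p.split("/", 2)
        idx, total = map(int, seq.split("-"))
        frags[idx] = bytes.fromhex(body)
    assert len(frags) == total
    return b"".join(frags[i] for i in range(1, total + 1))


data = b"example PSBT or other airgap payload " * 20
parts = to_parts(data)
assert from_parts(reversed(parts)) == data
print(f"{len(parts)} frames, e.g. {parts[0][:40]}...")
```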

Request & Response Implementation Guide. The heart of Gordian Envelope is its ability to elide information while still allowing both certification and verification of that data. That’s the Hashed Data Elision concept that we presented at IETF. However, Gordian Envelope has much more functionality than just hashed data elision, including literal functions, which can be used in a request-response system where one interoperable system asks for something and another provides it. Our request/response docs were spread across a variety of smaller docs such as the Expressions Research doc and our source code, so we consolidated it all into a single Gordian Transport Protocol Implementation Guide, which describes the layers that build up to GTP and how to do requests and responses with Gordian Envelope. As for why you might want to, our new Improving Multisigs with Request/Response document presents an important case study on the usage of this system: it makes very difficult processes, like creating a multisig with multiple devices, much more accessible by reducing decisions, research, and human-initiated actions, and thus much more likely to be used.


Updated Research List. All of our specifications can be found in our Research list. Since we have specifications at a variety of levels of development, from pure investigation on our part to implementation by multiple partners, we introduced a status listing that tells developers exactly how mature each specification is.

Reference Releases

Specifications are just one step in creating interoperability. We also produce reference apps and libraries that demonstrate how to use our specifications and how to incorporate best practices.

Gordian SeedTool 1.6. Our top reference app has long been Gordian SeedTool, which demonstrates specifications like Gordian Envelope, SSKR, and URs as well as best practices for safe and resilient digital-asset holding. We’ve been working on version 1.6 for a long time, but it’s now finally out, through GitHub and the Apple App Store. It includes updates to our newest specifications, new best-practices for UIs, and integration of Tezos assets.

Gordian Server 1.1. Our oldest supported reference app is Gordian Server, a Macintosh app that installs, maintains, and runs Bitcoin Core. It demonstrates our Quick Connect URI, but more importantly it shows off some of our architectural fundamentals, such as maintaining partitioned services that are separated from each other by a TorGap. The new 1.1.0 release of Gordian Server updates for Apple’s native Arm64 (M1+) chips and also works with newer versions of Bitcoin Core, up to and including the newly released Bitcoin Core 26.1. It also contains a major update that’s been a few years coming: it replaces the older RPC password files with the much more secure rpcauth system. (Thanks to Peter Denton for this update, and check out his Fully Noded for a wallet that integrates with Gordian Server!)
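For reference, my understanding of the rpcauth scheme (per Bitcoin Core's bundled share/rpcauth/rpcauth.py helper) is a random salt plus an HMAC-SHA256 of the password keyed by that salt; a rough equivalent sketch follows, though the upstream script remains the authoritative tool.

```python
# Rough sketch of generating an rpcauth line for bitcoin.conf, following my
# understanding of Bitcoin Core's share/rpcauth/rpcauth.py helper (random hex
# salt, HMAC-SHA256 of the password keyed by the salt). Prefer the upstream
# script for real deployments; the username and password here are examples.

import hmac
import secrets


def rpcauth_line(username: str, password: str) -> str:
    salt = secrets.token_hex(16)
    digest = hmac.new(salt.encode(), password.encode(), "sha256").hexdigest()
    return f"rpcauth={username}:{salt}${digest}"


# The resulting line goes into bitcoin.conf; the plaintext password is given
# to the RPC client (e.g. a wallet) and is never stored on the node itself.
print(rpcauth_line("gordian", secrets.token_urlsafe(24)))
```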

Rust Crate Updates. Over recent quarters, we converted our fundamental crypto libraries to Rust. We are continuing to keep them updated with our newest specifications and continuing to polish them, such as our recent work on bc-envelope-rust to streamline the API and reduce calls to clone().

Constrained Devices Research

Specifications, reference apps, and reference libraries represent some of our more mature work, but we’re also constantly researching other fields that might improve the management and usage of digital assets and identity online. One of our fields of research recently has been on constrained devices.

JavaCards. Could we hold assets or private keys on NFCs or JavaCards? We’ve discussed the topic at some of our recent Gordian meetings and have a Signal group dedicated to the topic, which you’re welcome to join. We’re hoping to do more with it in 2024.

no_std in Rust. Our Rust crate for bc-dcbor-rust now supports no_std. The no_std crate attribute allows Rust code to run on embedded systems and other constrained environments that don’t have access to the standard library. This means that dCBOR can now be used to serialize and deserialize data in firmware for IoT devices, microcontrollers, and smart cards. It’s another step forward in our support of constrained environments.

Provenance Research

How can you validate the provenance of data or identity? This is a natural expansion of our work with Gordian Envelope, which allows validation of data even when it’s been elided, so it’s been another source of research in the last quarter.

C2PA. We have joined the Coalition for Content Provenance and Authenticity (C2PA), which is focused on developing standards for certifying the provenance of media content. We’ve talked with them some about Gordian Envelope as a possible tool for this purpose.

Source-Code Provenance. We’ve long been thinking about source-code provenance and the validation of software developers, going back to our support for Joe Andrieu’s Amira Use Case, which grew out of RWOT5. Our software release use cases discuss many of the issues and how to resolve them with Gordian Envelope. More recently, we’ve been investigating SSH signing, which is now supported at GitHub. We’re working on a doc of best practices and also an SSH tool that will link up the envelope-cli with ssh-keygen. We’ve got a working prototype and expect to be able to talk more about the project, and the issues with software-release provenance, next quarter.
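For readers who want to experiment, signing and verifying a file with OpenSSH's ssh-keygen -Y interface looks roughly like the sketch below; the paths, identity, and allowed-signers file are hypothetical, and you should check the ssh-keygen man page for your OpenSSH version before relying on exact flags.

```python
# Hedged sketch of signing and verifying a release artifact with OpenSSH's
# `ssh-keygen -Y` interface (the same mechanism GitHub accepts for SSH-signed
# commits). Paths, key, identity, and the "file" namespace are illustrative.

import os
import subprocess

ARTIFACT = "release.tar.gz"
KEY = os.path.expanduser("~/.ssh/id_ed25519")
SIGNERS = "allowed_signers"      # lines of: <identity> <public key>

# Sign: produces release.tar.gz.sig alongside the artifact.
subprocess.run(
    ["ssh-keygen", "-Y", "sign", "-f", KEY, "-n", "file", ARTIFACT],
    check=True,
)

# Verify against the allowed-signers list.
with open(ARTIFACT, "rb") as artifact:
    subprocess.run(
        ["ssh-keygen", "-Y", "verify", "-f", SIGNERS,
         "-I", "dev@example.com", "-n", "file", "-s", ARTIFACT + ".sig"],
        stdin=artifact,
        check=True,
    )
print("signature verified")
```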

Identity Research

Work on identity, ultimately stemming from Christopher Allen’s “Path to Self-Sovereign Identity” and continued through work on DIDs and VCs at Rebooting the Web of Trust, was one of the things that got Blockchain Commons going in the first place. However, our partners and patrons have all been focused more on digital assets, so that’s where most of our work has been concentrated over the last five years. Nonetheless, we keep our foot in the identity pond, particularly for some of our Advocacy work.

eIDAS Dangers. Late in 2023, Christopher published an article on “The Dangers of eIDAS”, Europe’s new digital identity regulation. Unfortunately, besides having some deeply flawed security models, it also ignores many of the dangers of the past.

Identity Dangers. What are those dangers? That’s the topic of “Foremembrance”, a book that Christopher has been working on for a few years that recalls how overidentification led to genocide during World War II. On March 27, which is Foremembrance Day, Christopher gave a talk on the topic on Twitter. The YouTube video and the presentation are both available.

The Future

Many of these topics will continue onward, especially our more research-focused projects, such as our look at SSH signing and software provenance. We’re also hoping to do some major work with one or more of our partners to turn many of our specifications into deployed reality.

Our next Gordian Meeting, on May 1st, will have another feature presentation: Dan Gould will talk about PayJoin.

Finally, our advocacy is heating up again as the Wyoming legislature prepares for its meetings in May through September. We are engaged in discussions and early agenda setting with the co-chairs of the Wyoming Select Committee on Blockchain. We are hoping to see topics such as digital attestations from the Secretary of State, best practices for data minimization, and duties of care and best practices for digital identity on the agenda, as well as topics that we’ve pushed in the past such as Wyoming eResidency.

Blockchain Commons needs funding to continue its work as an architect for interoperable specifications and a central forum for their discussion because many of our funding sources dried up due to the economic conditions of recent years. If you’re a developer, please consider becoming a Benefactor to lend your name to our efforts, and if you’re a company, especially one using our work, please consider becoming a Sustaining Sponsor. We’re also open to working with partners on special projects that are open-source and aligned with our objectives, or that accelerate the deployment of our specs in your products. If sponsorship or a special project interests you, please drop us a line for more information.

We have further been seeking grants to continue our work. If you have the inside track on any grants that you think would be well-aligned with our work, or just want to make suggestions, again drop us a line.

Blockchain Commons is literally a commons: we are producing work that we hope is useful for the rest of the community, some of which is now widely deployed. But the name might have been too apt: as the Tragedy of the Commons warns, public resources of this sort become depleted without support. Help us replenish our resources to make sure the Commons continues!

Monday, 15. April 2024

FIDO Alliance

FIDO Paris Seminar: Mastering Passkeys, the Future of Secure Authentication:

FIDO Alliance and host sponsor Thales held a one-day seminar in Paris for a comprehensive dive into passkeys. The seminar provided an exploration of the current state of passwordless technology, detailed discussions on how passkeys work, their benefits, practical implementation strategies and considerations, and case studies. 

Attendees had the opportunity to engage directly with those who are currently implementing FIDO technology through open Q&A, networking and exhibits to get first-hand insights on how to move their own passkey deployments forward.

View the seminar slides below:

The State of Passkeys with FIDO Alliance.pptx

A Deep Dive on Passkeys: FIDO Paris Seminar.pptx

Digital Identity is Under Attack: FIDO Paris Seminar.pptx

Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx

Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx

The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx

The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx

Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx

Hyperledger Foundation

Apply Now for the Annual Hyperledger Mentorship Program!

Are you passionate about blockchain and eager to impact the field? Ready for a structured, hands-on opportunity to learn the ropes of open source development? Looking for a pathway to develop source code, documentations, and research skills while helping to advance open source projects and communities? Then you should apply for the annual Hyperledger Mentorship Program.


Identity At The Center - Podcast

Dive into a conversation that marries the complexity of identity management with the subtleties of winemaking. Our latest episode features John Podboy, a cybersecurity SVP and a wine enthusiast, who shares his insights on the future of IAM in the banking industry, the role of AI, and the potential of FIDO2. Plus, discover his unique perspective on how vineyards mirror the growth and challenges of digital identity. Don't miss this rich blend of topics. Listen now and enrich your understanding of the identity landscape.

#iam #podcast #idac

Friday, 12. April 2024

DIF Blog

Presentation Exchange v2.1: Working Group Approval

We are excited to announce that Presentation Exchange v2.1 has reached a significant milestone and is now under review for Working Group Approval. This update marks a critical step forward in the specification's continued adoption. Community members and stakeholders are encouraged to provide their feedback by April 26, 2024. Barring any significant objections, the proposal will transition to the Working Group Approved state and subsequently seek the approval of the DIF Steering Committee.

What’s New in v2.1?

The latest iteration is a minor release, bringing with it several important updates for stability and adoption:

Security Enhancements: We have introduced a "Security Considerations" section, helping users navigate the security implications of the exchange more effectively.

Expanded Use Cases: A new "Use Cases" section has been added. This aims to broaden the understanding and applicability of the Presentation Exchange, providing examples and scenarios where it can be implemented.

Editorial Improvements: To further enhance the readability and clarity of the documentation, we have made various editorial changes throughout the text.

Future Developments

It’s important to note that as a minor release, v2.1 does not incorporate any breaking changes. This decision ensures stability and backward compatibility. Future potential enhancements are currently being explored and can be tracked via the "Future" tagged issues in our GitHub repository.
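As a point of reference for newcomers, the sketch below shows the general shape of a Presentation Exchange presentation definition: a verifier lists input descriptors with constraints on credential fields. The identifiers and field paths are invented for illustration; consult the specification for normative details.

```python
# Minimal, hypothetical example of a Presentation Exchange presentation
# definition: a verifier describes the proofs it needs, and a wallet selects
# matching credentials. Values below are invented for illustration.

presentation_definition = {
    "id": "employment-check-001",
    "input_descriptors": [
        {
            "id": "proof_of_employment",
            "purpose": "We need to confirm your current employer.",
            "constraints": {
                "fields": [
                    {
                        "path": ["$.credentialSubject.employer"],
                        "filter": {"type": "string"},
                    }
                ]
            },
        }
    ],
}
```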

We Want to Hear from You!

Your input is invaluable to us. We invite all community members to review the proposed changes and share their feedback via the Github issues. Your insights will play a pivotal role in shaping the final version of Presentation Exchange v2.1 and future iterations. Together, we can continue to evolve and strengthen this essential standard.


FIDO Alliance

Tech Radar: Bitwarden now supports passkeys on iOS devices

Popular free password manager Bitwarden now supports passkeys on iOS devices. The news follows the recent trend of password managers bringing passkey support to mobile, including Keeper and Proton Pass. Bitwarden added passkey support to its desktop browser extension last year, and now users can create and store passkeys on their iOS app too. Android support is yet to arrive, however.


GB News: Elon Musk just killed passwords on X, here’s what you need to use a passkey to login

Passkeys were developed by the FIDO Alliance, an industry body with the stated aim of helping to “reduce the world’s over-reliance on passwords”, with the likes of Apple, Google and Microsoft amongst its members. First promoted as an alternative to passwords back in mid-2022, the clever system relies on the same biometrics that allow you to log in to your iPhone, iPad, Windows PCs, Samsung phones and tablets, Android phones, and dozens more, without typing out a password or PIN.


Dark Reading: Selecting the Right Authentication Protocol for Your Business

Authentication protocols like passkeys serve as the backbone of online security, enabling users to confirm their identities securely and access protected information and services. Passkeys have been deployed by several major organizations such as Google, Apple, Shopify, Best Buy, TikTok, and GitHub.


TechCrunch: X adds support for passkeys globally on iOS

X has officially extended support for passkeys to all global iOS users. This news was announced on the heels of the social media platform introducing passkeys to US-based users earlier in January.


EdgeSecure

EdgeCon Spring 2024


In partnership with The College of New Jersey, we were thrilled to bring this transformational event to a campus known for its natural beauty situated on 289 tree-lined acres in suburban Ewing Township, New Jersey, in close proximity to both New York City and Philadelphia.

EdgeCon Spring 2024 was dedicated to Excelling in a Digital Teaching & Learning Future. Featuring 15-20 breakout sessions exploring the event theme, EdgeCon Spring also featured high profile, industry leading vendors from across the academic enterprise. Attendees had the opportunity to engage with and learn from a growing community of digital learning professionals while discovering innovative solutions to help institutions solve today’s biggest digital learning challenges.

Date: April 18, 2024
Time: 9 am – 5 pm
Attendee Ticket: $49

Event Location:
The College of New Jersey
2000 Pennington Road
Ewing, NJ 08628-0718

Agenda

8 a.m.-8:30 a.m.—Check-In & Networking

8:30 a.m.-9:30 a.m.—Breakfast, Networking, & Exhibitor Connections

9:35 a.m.-10:35 a.m.—General Session: AI and the New Era of Learning: How Higher Education Must Respond

10:45 a.m.-11:25 a.m.—Breakout Sessions

11:35 a.m.-12:15 p.m.—Breakout Sessions

12:15 p.m.-1:20 p.m.—Lunch, Networking, & Exhibitor Connections

1:30 p.m.-2:10 p.m.—Breakout Sessions

2:20 p.m.-3:00 p.m.—Breakout Sessions

3:10 p.m.-3:50 p.m.—Breakout Sessions

3:50 p.m.-5:00 p.m.—Snacks/Coffee, Networking, & Exhibitor Connections

C. Edward Watson, Ph.D.

Associate Vice President for Curricular and Pedagogical Innovation and
Executive Director of Open Educational Resources and Digital Innovation,
American Association of Colleges and Universities (AAC&U)

Announcing EdgeCon Spring 2024 Keynote Speaker

Generative AI tools, such as ChatGPT, Claude, Gemini, and others, have had an astonishing impact on the ways we learn, work, and think over the past year.  Initially, the concern for many in higher education was how students might use these tools to complete assignments; however, a much more complex and daunting challenge has emerged.  A 2023 Goldman Sachs report analyzed tasks versus jobs and concluded that two-thirds of current occupations could be partially automated by AI. This doesn’t mean that two-thirds of jobs will be replaced by AI, though some positions will indeed be lost to the new technology; rather, most of our graduates will soon be asked to collaborate with AI to complete significant portions of their work each week.

Drawing from the presenter’s new book, Teaching with AI: A Practical Guide to a New Era of Human Learning (Johns Hopkins University Press), this keynote will explore the evolving AI landscape and detail the companion challenges and opportunities that are emerging for higher education.  While academic integrity and AI detection will be discussed, the core focus of this keynote will be on concrete approaches and strategies higher education can adopt, both within the classroom and across larger curricular structures, to best prepare students for the life that awaits them after graduation.

At AAC&U, he leads national and state-level advocacy and policy efforts to advance quality in undergraduate student learning. Before joining AAC&U, Dr. Watson was the Director of the Center for Teaching and Learning at the University of Georgia (UGA). At UGA, he led university efforts associated with faculty development, TA development, student learning outcomes assessment, learning technologies, and media production services.

He has published on teaching and learning in a number of journals, including Change, Diversity & Democracy, Educational Technology, EDUCAUSE Review, International Review of Research in Open and Distributed Learning, Journal for Effective Teaching, Liberal Education, Peer Review, and To Improve the Academy, and has recently been quoted in the New York Times, Chronicle of Higher Education, Campus Technology, EdSurge, Consumer Reports, UK Financial Times, and University Business Magazine and by the AP, CNN, and NPR regarding current teaching and learning issues and trends in higher education. His most recent book is the forthcoming Teaching with AI: A Practical Guide to a New Era of Human Learning (Johns Hopkins University Press).

Breakout Sessions

Session 1: 10:45 – 11:25 a.m.

Embracing or Limiting AI to Enhance Authentic Learning

Room: BSC 100

While fully ‘ChatGPT-proofing’ your course might be challenging, learn how to creatively design assignments that promote genuine student engagement. This session will guide you through innovative strategies to modify your assessment approach, either using or limiting AI tools, to create captivating, challenging assignments that inspire authenticity and excitement in your students.

Presenter:

Ellen Farr, Assistant Director, Center for Excellence in Teaching and Learning, The College of New Jersey
Judi Cook, Executive Director, Center for Excellence in Teaching and Learning, The College of New Jersey

We'll Do the Dirty Work: EdgeLearn and the Realities of Digital Learning & Instructional Design Support

Room: BSC 225 East

Innovation in higher education is fueled by new approaches to instructional design and technology, partnered with advances in pedagogical theory and process. But most schools don’t have the time or budget to do it because their most talented, motivated staff and faculty are weighed down by important, but somewhat monotonous tasks and responsibilities. This session will demonstrate how EdgeLearn can lessen that burden at a non-profit price and allow you to advance your online programming with ease.

Presenter:

Joshua Gaul, Associate Vice President & Chief Digital Learning Officer, Edge

Future of AI: Insights from the Next Generation

Room: BSC 225 West

Tired of the same old AI discussions? This panel flips the script! Join a conversation with college and graduate students, the future leaders in AI development and application, to hear their unfiltered thoughts and expectations. Get ready for a dynamic discussion about:

Student concerns: What ethical considerations are paramount for the next generation of AI?

Emerging trends: What exciting possibilities do students see for AI in their fields?

Bridging the gap: How can academia and industry better prepare students for the AI-powered future?

This isn’t your typical AI talk. Be prepared to be challenged and inspired!

Moderator:

Diane Rubino, Adjunct Assistant Professor, NYU 

Student Panelists:

Harshil Thakkar, Stevens Institute of Technology, Master’s Engineering Management Candidate
Evangelia Gkaravela, Stevens Institute of Technology, Master’s Engineering Management (researcher), Space Systems Engineering Candidate
Katherine Weiss, NYU, MS in PR/Corporate Communications Candidate

Session 2: 11:35 a.m. – 12:15 p.m.

Harnessing the Power of AI: A Foundation for Higher Education Faculty

Room: BSC 100

New to AI? This workshop is your launchpad! Designed specifically for faculty new to AI, this session will equip you with a foundational understanding of AI’s potential as you rethink activities and assessments to address AI disruption. We’ll break down key terms and explore innovative AI tools that personalize instruction, boost engagement, and deepen understanding. We will also address the challenges posed by artificial intelligence usage. Get hands-on ideas about where to start redesigning your online course with practical applications of AI tools in your field. Walk away with a solid foundation to revolutionize your teaching and student success!

Presenter:

Laurie Hallick, Instructional Designer, Molloy University

Interactive Examination of Organizational Ecosystems and Online Success

Room: BSC 225 East

The goal of this interactive discussion is to address questions about the relationship between organizational structures and the success, demise, and quality of online education programs. During the session, we will use dynamic online polling to gather group insights and present them visually, and offer the opportunity to explore key questions dissecting the organizational ecosystem, including the interplay of administrative policies, institutional culture, technology infrastructure, and student support. Through this dialogue, the goal is to identify challenges and opportunities for better synergies within institutional frameworks to advance online learning.

Presenter:

Alexandra Salas, President, Cognition Ink LLC

Integrating High Performance Computing into the Undergraduate Curriculum: Insights from the School of Science at TCNJ

Room: BSC 225 West

Almost all fields that our students enter after graduation require enhanced data science and computational skills in the modern workforce. The importance of these skill sets will only continue to increase. TCNJ’s High Performance Computing (HPC) cluster is used for computationally driven scientific research by all departments in the School of Science and supports 500 to 700 students per academic year in both class-related usage and faculty-mentored research opportunities. In this presentation we describe how we have successfully integrated HPC into our science curriculum, allowing us to equip students directly with the skills they will need to enter the 21st-century workforce and providing faculty with a resource to engage students in transformational research experiences and hands-on learning in the classroom and the laboratory.

Presenters:

Sunita Kramer, Dean, School of Science, The College of New Jersey
Joseph Baker, Professor of Chemistry, The College of New Jersey
Shawn Sivy, HPC Systems Administrator, The College of New Jersey

Session 3: 1:30 – 2:10 p.m.

AI and the Future of Student Success

Room: BSC 100

New Jersey Institute of Technology is embracing AI to support everything from campus life to curriculum planning. In this presentation, you’ll see a glimpse of the future of student success enriched by AI. Along with their partner, Slalom, NJIT will debut their early-stage Digital Student Advisor. Ed Wozencroft, NJIT’s Vice President for Digital Strategy & CIO, will inspire you to think of the world of possibility for students and faculty… What If…?

Presenters:

Stephen Walsh, Senior Director, Public & Social Impact, Slalom
Ed Wozencroft, Chief Information Officer, VP for Digital Strategy, New Jersey Institute of Technology

MAGIC in Higher Education: Motivating, Active Learning, Gamifying, Imagining, and Collaborating

Room: BSC 225 East

“MAGIC in Higher Education: Motivating, Active Learning, Gamifying, Imagining, and Collaborating,” a convergent parallel design, mixed-methods study, assessed how embedding play into the architecture of a classroom can improve the learning process for students. We aimed to identify if changing the natural passive environment of a classroom to an active, play-driven environment would influence learning outcomes. Considering low national retention and graduation rates within community colleges, we examined concepts highlighting embedded play in the lesson as an extrinsic motivator to augment the learning process. 

We hypothesized that creating an abstract classroom learning environment, considering both passive and active learning can positively impact comprehension and the student learning experience. We collaborated with faculty and administration to investigate learning environments and teaching practices. We focused on the architecture and design of the classroom environment and how engaging students in play might strengthen its structure, increasing comprehension of the subject material. Our research revealed that play promotes positive experiences for students focusing on active learning.

The data exposed a dichotomy between teaching and learning; faculty primarily engage in passive lecture-based teaching, whereas students prefer active play-based learning. We recognized that a natural classroom environment is subjective depending on the discipline and pedagogy. Therefore, we engaged faculty to redesign a lecture to include play-based learning aligning with their discipline. The data from the active learning investigation revealed that participating in the playful activity significantly improved students’ understanding and application of the lesson’s content. Reflecting on our research and outcomes, we created a forum to showcase our data to faculty, administration, and students. This showcase has launched a play-based, active learning Community of Practice (CoP) for faculty professional development.

Presenters:

Dr. Jennifer Gasparino, Associate Professor, Human Services & Phi Theta Kappa Advisor, Passaic County Community College
Andy Perales, Program Coordinator, Teachers Excellence Project & Phi Theta Kappa Co-Advisor, Passaic County Community College
Alexandra Della Fera, Associate Professor, English, Passaic County Community College
John Paul Rodriguez, Assistant Professor, Computer Information Services, Passaic County Community College

Student Presenters:

Bilal Gebril, President, Phi Theta Kappa
Erick Vasquez Minaya, Provisional Membership Coordinator, Phi Theta Kappa
Venus John, Honor in Action Co-Chair

Beyond Barriers: Crafting Inclusive Learning Environments through Digital Accessibility and Universal Design

Room: BSC 225 West

Digital accessibility is frequently approached reactively, wherein instructors generate course content, students submit accommodation letters, and the subsequent realization of content inaccessibility prompts efforts to modify and enhance accessibility. This method proves time-consuming and perpetuates the marginalization of students by reinforcing structural and environmental barriers to learning. Rather, embracing a proactive universal design perspective in addressing digital accessibility enables instructors to prioritize the diverse needs of learners during the creation of digital content and materials. This approach minimizes the necessity for accommodations, fostering a more inclusive learning environment from the outset. While achieving digital accessibility necessitates a comprehensive commitment at the systemic and institutional levels, instructors can adopt various practices within their classrooms to advance the creation and provision of accessible course materials. This interactive workshop will guide participants in contemplating the significance of digital accessibility in higher education and in exploring practical tools for implementing digital accessibility principles across physical, hybrid, and online learning environments, grounded in a universal design approach.

Presenter:

Mel Katz, Accommodations Support Specialist for Curriculum and Assessment, The College of New Jersey

HPC, AI, and Data (HPC AID) Affinity Group

Room: BSC 104

Edge, in collaboration with our partner institutions, has launched the HPC, AI, and Data (HPC AID) Affinity Group. The group aims to expand knowledge access, community, and practice in HPC and Research Computing in support of AI and data intensive research and education. 

Join us in our mission to share information and best practices related to High Performance Computing and Research Computing and Data. The group is open to anyone with focus on leveraging HPC and Data in support of Research and Education, so please invite peers or colleagues to join us. 

Session 4: 2:20 – 3:00 p.m.

Unlocking Learning: The Educational Power of DIY Escape Rooms

Room: BSC 100

Escape rooms offer numerous benefits when integrated into higher education classroom settings. By presenting students with complex puzzles and challenges, escape rooms promote teamwork, communication, and critical thinking skills. Collaborative problem-solving becomes the focal point, encouraging students to leverage each other’s strengths and expertise to achieve a common goal.

During this workshop, participants will see photos of the TLTC’s Pirate Escape Room and an example of an online escape room, followed by a discussion of pros and potential pitfalls of designing one. Everyone will receive a digital copy of resources, tips, and ideas to guide them through creating an escape room of their own.

Presenter:

Kate Sierra, Instructional Designer, Seton Hall University

Trends and Future Prospects of Digital Accessibility in Learning Environments

Room: BSC 225 East

This presentation explores the latest trends and future prospects of digital accessibility in learning environments, focusing on integrating Artificial Intelligence (AI), Voice User Interfaces (VUI), Augmented Reality (AR), Virtual Reality (VR), Mobile Accessibility, and Inclusive Design principles. Participants will gain insights into AI-driven accessibility solutions, the benefits of VUI in digital learning platforms, leveraging AR and VR for accessible learning experiences, mobile accessibility considerations, and strategies for incorporating inclusive design.

Presenter:

Laura Romeo, Instructional Designer, Edge

Active Learning Design: Contextualizing Multimedia for Knowledge Transfer

Room: BSC 225 West

Technological advancements have made multimedia a main language for conveying information and knowledge. Multimedia elements like video, audio, animation, and interactive media enable learners to encode information in multiple formats, which leads to deeper understanding. A multimedia learning environment tailored to the context allows students to integrate and interpret relationships. This approach promotes learner-centered teaching and adheres to constructivist theory. According to this theory, learners do not passively absorb new knowledge and understanding. Instead, they actively build new knowledge by experiencing and integrating new information with their prior knowledge.

This session will explore the concept of visual thinking by delving into the cognitive psychology behind media-based instructions and their role in humanizing digital learning and fostering stronger teacher-student relationships. It will also highlight the significance of interactive multimedia in learning environments. Encouraging students to interact with and manipulate media to achieve their learning goals creates an environment that promotes learning by doing. This teaching approach promotes higher-order thinking in multiple dimensions, resulting in better knowledge retention. 

This session will also explore the distinct capability of multimedia in addressing various learning objectives and requirements and discuss the methods of integrating them into instructional design. I will use actual course examples to illustrate how and when students learn best. By participating in this session, attendees will better understand the distinct advantages of multimedia teaching and acquire practical techniques for integrating multimedia design into their courses.

Presenter:

Cecily McKeown, Instructional Multimedia Specialist, Hudson County Community College

Session 5: 3:10 – 3:50 p.m.

The Amazing Race: Keeping Up with GenAI at Montclair State University

Room: BSC 100

In January 2023, the instructional design team at Montclair State University began ideating a response to the advances in artificial intelligence that made headlines in late 2022. Since then, Instructional Technology and Design Services (ITDS) has produced a suite of web-based resources, workshops and trainings, consultations, and more to guide University faculty through discovery and exploration of GenAI so it can be leveraged pedagogically while mitigating misuse. In this session, Montclair instructional designers Joe Yankus & Gina Policastro will share their experience composing these resources, facilitating small and large-group faculty development, lessons learned, and goals for the upcoming year.

Presenters:

Joseph Yankus, Instructional Designer, Montclair State University
Gina Policastro, Instructional Designer, Montclair State University

10 Things I Wish I Knew About Accessible Digital Media Before Becoming an Instructional Designer

Room: BSC 225 East

Word-processed documents, presentation slide decks, PDFs, and videos can all be made ready for use by all students. It’s not just a good thing to do, it’s also the law. In this presentation, you will learn ten easy tips that can help anyone have a better experience using your digital documents. 

This session will concentrate on Microsoft documents. The concepts will be applicable to other programs available on other platforms as well as documents created in the cloud.

The big ideas include the importance of headings, alternative text for images, tables, accessibility checkers, lists, font selection and color, slide titles, saving files as PDFs, reading order, and captioning.

Presenter:

Ann Oro, Senior Instructional Designer, Seton Hall University

Dual Rubrics That Offer Learning Insights

Room: BSC 225 West

Simple Rubrics support LEARNING by offering a checklist of expectations, a mechanism for delivering formative/summative evaluation, and a framework for learner reflection and self-remediation. Dual Rubrics go further to also support TEACHING by offering a means for making an inference about students’ mastery of learning outcomes/competencies. Implementing Dual Rubrics in the Canvas LMS, at the course or program level, offers a data-driven opportunity to incorporate learning insights that support quality improvement in instructional effectiveness and curricular design.

Presenter:

Karen Harris, Senior Instructional Designer and Assessment Specialist, Rutgers University

Exhibitor Sponsors

Lanyard Sponsor

VIP Reception Sponsor

The post EdgeCon Spring 2024 appeared first on NJEdge Inc.


AI Teaching & Learning Symposium, presented by Edge and Seton Hall University


Join Edge and Seton Hall University for the inaugural “AI Teaching & Learning Symposium”. The symposium will consider the impact of AI on teaching, learning, and the student experience. Located in the quaint town of South Orange, New Jersey, the 58-acre main campus is only 14 miles from Manhattan.

Date: Tuesday, June 11, 2024

Event Location:
Seton Hall University
400 South Orange Avenue
South Orange, NJ 07079

Register Now »

Vendor/Sponsorship Opportunities

Exhibitor Sponsorships are available. Vendors may also attend the conference without sponsoring, but at a higher ticket price.

Contact Adam Scarzafava, Associate Vice President for Marketing and Communications, for additional details via adam.scarzafava@njedge.net.

Download the Sponsor Prospectus Now »

Call for Proposals

Submit your presentation topic for the upcoming AI Teaching & Learning Symposium, presented by Edge and Seton Hall University! The inaugural symposium will consider the impact of AI on teaching and learning.

Submit Proposal »

The post AI Teaching & Learning Symposium, presented by Edge and Seton Hall University appeared first on NJEdge Inc.

Thursday, 11. April 2024

Trust over IP

ToIP Announces the First Implementers Draft of the Trust Spanning Protocol Specification

Read about a protocol that is to digital trust what the Internet Protocol (IP) is to digital data. The post ToIP Announces the First Implementers Draft of the Trust Spanning Protocol Specification appeared first on Trust Over IP.
Why do we need a Trust Spanning Protocol?
Where can I get a high-level overview of TSP?
What does the Implementers Draft cover?
How does TSP differ from other trust protocols?
What implementation projects have been announced?
What kind of feedback are we seeking on this draft?
How can you provide feedback?

Why do we need a Trust Spanning Protocol?

No one would question that the Internet has completely transformed the global information economy. It has become indispensable for connectivity and reliable content delivery. But as it has grown, so have the threats against it and the vexing challenges in deciding what to trust. 

Now, AI is pushing those concerns into overdrive. A 2023 study by CyberArk found that 93% of security professionals expect AI-enabled threats to affect their organization in 2023—with AI-powered malware cited as the #1 concern. No less an industry luminary than Marc Andreessen recently said that the war against detecting AI fakes was unwinnable—our only solution was to find a way to “invert the problem” by being able to prove content authenticity.

Why, after 30 years of steadily compounding security issues, does industry not yet have a fix? Why, with technologies like DNSSEC and TLS, and industry bodies like IETF and the CA/Browser Forum, do we still have daily headlines about data breaches, ransomware attacks, identity theft, and malware infestations? Why, with the explosive interest in generative AI, are many experts more worried about it being used to attack us than to protect us?

The answer is the reason the Trust Over IP (ToIP) Foundation was founded four years ago. In short, authenticity, confidentiality, and metadata privacy features were never built into the core fabric of the Internet. To solve the root of this problem and not the symptoms, we need a next-generation trust layer on top of the existing Internet.

The heart of this layer is a protocol that is to digital trust what the Internet Protocol (IP) is to digital data. That is the ToIP Trust Spanning Protocol (TSP).

Where can I get a high-level overview of TSP?

First, start with this blog post we published in January 2023 when we launched the ToIP Trust Spanning Protocol Task Force (TSPTF). It explains the overall purpose of the TSP and where it fits in the ToIP stack.

Second, read the Mid-Year Progress Report on the ToIP Trust Spanning Protocol, published in August 2023 to summarize the seven pillars of the TSP design. With the exception of some terminology evolution, these seven pillars have not changed as we worked through multiple stages of Working Drafts over the past seven months.

Today we are pleased to announce the release of the first Implementers Draft.

What does the Implementers Draft cover?

This table summarizes the 10 major sections of the specification:

Verifiable Identifiers (VIDs): VIDs are the first of the seven pillars — cryptographically-verifiable identifiers that provide technical trust in the TSP. Covers: why they are necessary, how they provide access to public keys and ToIP endpoint addresses, how they are verified, and how keys can be rotated.
Messages: TSP is a message-based asynchronous communication protocol. Covers: message envelopes, payloads (confidential, non-confidential, headers), signatures, relationship setup and out-of-band introductions.
Nested Messages: TSP messages can be nested one or two layers to achieve metadata privacy. Covers: payload nesting, nested relationship VIDs.
Messages Routed through Intermediaries: TSP messages can be routed through intermediaries for several reasons, e.g., asynchronous delivery, reliability, and performance. However, the primary focus is metadata privacy protection. Covers: message routing, direct neighbor relationships, endpoint-to-endpoint (“tunneled”) messages, private VIDs, single intermediaries, two-or-more intermediaries.
Multi-Recipient Communications: TSP messages may be sent to multiple recipients. Covers: multi-recipient lists and anycast intermediaries.
Control Payload Fields: TSP messages can be multi-purpose, so rather than dedicated control messages, the specification defines control payloads. Covers: relationship formation (parallel, nested, third-party introductions), relationship events (key updates, routing info, and relationship cancellation).
Cryptographic Algorithms: The authenticity and confidentiality properties of TSP rely on public/private key cryptography. Covers: public key signatures, public key authenticated encryption, encryption and decryption primitives, Hybrid Public Key Encryption (HPKE), Libsodium Sealed Box.
Serialization and Encoding: TSP uses Composable Event Streaming Representation (CESR) for message serialization and encoding. CESR supports popular data encoding formats including JSON, CBOR, MsgPack, and others. Covers: envelope encoding, payload encoding (non-confidential, confidential, nested), signature encoding.
Transports: TSP’s authenticity, confidentiality, and metadata privacy properties are designed to be independent of the choice of transport protocol. Covers: transport service interface, transport mechanism examples.
Appendix A (Test Vectors): Still being completed; test vectors for common use cases. Covers: direct mode messages, direct mode nested messages, routed mode messages.

How does TSP differ from other trust protocols?

Proposing a fundamental new Internet-scale protocol for digital trust is an ambitious undertaking. Why did the ToIP Foundation take this path? Let’s start by looking at related efforts in this area.

Related protocols

This table summarizes other well-known protocols that address various facets of digital trust:

OpenID Connect (OIDC): An authentication layer on top of the OAuth 2.0 authorization framework specified by the OpenID Foundation as a RESTful HTTP API using JSON as a data format. Supports basic user profile information access; optional features include encryption of identity data, discovery of OpenID providers, and session management.
OpenID for Verifiable Credentials (OID4VC): A family of protocols from the OpenID Connect Working Group built on top of OIDC for issuance (OID4VCI) and presentation (OID4VP) of verifiable digital credentials, plus a wallet-based user authentication protocol (SIOP).
DIDComm: Specified by the DIDComm Working Group of the Decentralized Identity Foundation (DIF), DIDComm is a peer-to-peer secure messaging protocol in which the endpoints are specified by DIDs (decentralized identifiers).
TLS (Transport Layer Security): A cryptographic protocol from the IETF best known for enabling secure HTTPS browser connections; also widely used in applications such as email, instant messaging, and voice over IP. Provides security, confidentiality, integrity, and authenticity through the use of X.509 digital certificates.
MLS (Message Layer Security): Specified by the MLS Working Group of the IETF, MLS is a security layer for end-to-end encrypted messages in arbitrarily sized groups. Its security properties include message confidentiality, message integrity and authentication, membership authentication, asynchronicity, forward secrecy, post-compromise security, and scalability.
RCS (Rich Communication Services): A text-based mobile messaging protocol specified by the GSMA to replace SMS messages with a richer feature set including in-call multimedia. RCS does not natively support end-to-end encryption; Google added it using the Signal Protocol in their own implementation. Apple has said it will support RCS once GSMA standardizes end-to-end encryption.
Signal Protocol: A non-federated cryptographic protocol specified by the Signal Foundation that provides end-to-end encryption for voice and instant messaging conversations. The protocol combines the Double Ratchet Algorithm, prekeys, and a triple Elliptic-curve Diffie–Hellman (3-DH) handshake.
Matrix Protocol: An application layer communication protocol for federated real-time communication specified by the Matrix Foundation. It provides HTTP APIs for securely distributing and persisting messages in JSON format over an open federation of servers, and can integrate with standard web services via WebRTC to facilitate browser-to-browser applications.
DNSSEC: A suite of extension specifications from the IETF for securing data exchanged in the Domain Name System (DNS). The protocol provides cryptographic authentication of data, authenticated denial of existence, and data integrity, but not availability or confidentiality.
ISO/IEC 14443-4:2018: A half-duplex block transmission protocol designed for a contactless environment; it defines the activation and deactivation sequence of the protocol. It is intended for use with other parts of ISO/IEC 14443 and is applicable to proximity cards or objects of Type A and Type B.

Related cryptographic data structures

Protocols are not the only ingredient required for Internet-scale digital trust. This table summarizes some of the standard cryptographic data structures that have been developed:

X.509 digital certificates: An ITU standard defining the format of the public key certificates used in many Internet protocols, including TLS, as well as digital signatures. An X.509 certificate binds an identity (a hostname, an organization, or an individual) to a public key using a digital signature. A certificate is signed either by a certificate authority (CA) or self-signed. X.509 also defines certificate revocation lists and a certification path validation algorithm.
Encrypted/signed PDF files: Portable Document Format, originally developed by Adobe, became an ISO standard in 2008. A PDF file may be encrypted; PDF 2.0 defines 256-bit AES encryption as the standard but also defines how third parties can define their own encryption systems for PDF. ISO 32000-2 defines how PDF files may be digitally signed for secure authentication.
Verifiable digital credentials: With the emergence of digital wallets, multiple formats for cryptographically verifiable digital credentials have been developed, including the W3C Verifiable Credentials Data Model; ISO mDL/mDOC; IETF SD-JWTs; Hyperledger AnonCreds; and ToIP Authentic Chained Data Container (ACDC).
C2PA content credentials: The C2PA standards define a model for binding cryptographically verifiable provenance information to digital media content, together with a model for evaluating the trustworthiness of that information.

How and why is TSP different?

As the sections above show, many thousands of person-hours have been invested in protocols and cryptographic data structures designed to address the Internet’s multitude of security, confidentiality, and privacy issues. So why have the members of the ToIP Foundation spent four years developing TSP?

The fundamental reasons are summarized in this table:

Minimalist design as a spanning layer for higher-layer protocols: The single most important design goal for the TSP—and the biggest differentiator from the protocols listed above (with the possible exception of DIDComm)—is the critical role of a spanning layer in a protocol stack. The reasons it must be “as simple as possible but no simpler” are explained at length in Principle #3 of the Design Principles for the ToIP Stack. The TSP does not include many of the features of the protocols above precisely because it is designed so those features can be provided in higher-level protocols. The benefit is that all of those higher-level protocols can be much simpler and more future-proof because they automatically inherit all the technical trust features achieved at the TSP layer.
Decentralized peer-to-peer architecture: By building on HTTP and RESTful APIs, the OpenID family of protocols is inherently Web-centric (client/server). The TSP does not make that architectural assumption. Like the IP protocol, it can work between any two peers across any kind of network or software architecture.
VIDs & DIDs: Like DIDComm, all TSP endpoints use cryptographically verifiable identifiers (VIDs) such as those defined by the W3C Decentralized Identifiers (DIDs) specification. VIDs not only support full decentralization, but also provide portability and cryptographic agility for lifetime relationship management.
Public Key Authenticated Encryption and Signature: TSP combines modern public key authenticated encryption and public key signatures to provide the strongest protection against both key compromise impersonation and sender impersonation. This is achieved by using either the Auth Mode primitives defined by IETF RFC 9180 HPKE (Hybrid Public Key Encryption), or HPKE Base Mode or Libsodium Sealed Box primitives enhanced with an ESSR (Encrypt Sender Sign Receiver) pattern. (An illustrative sketch of this encrypt-and-sign idea appears after the implementation projects below.)
Payload agility: TSP uses the CESR text-binary dual encoding format, which supports composability of both text and binary primitives—including JSON, CBOR, and MsgPack—in the same message.
Cryptographic agility: Another key feature of CESR is its code tables for all types of cryptographic primitives. For example, it can transmit any of the cryptographic data structures listed above. This enables TSP ecosystems to standardize on precise signature and encryption algorithms yet still evolve them over time.
Transport independence: Although the name “Trust Over IP” suggests a dependence on the TCP/IP stack for transport, a core design goal of TSP is in fact to provide end-to-end authenticity, confidentiality, and metadata privacy entirely independent of the underlying transport protocol.

What implementation projects have been announced?

In parallel with the first Implementers Draft, at next week’s Internet Identity Workshop #38 we will be announcing the first two TSP implementation projects—each one led by one of the primary authors of the TSP specification:

A Rust open source implementation led by co-author Wenjing Chu is being proposed as a new project at the OpenWallet Foundation (OWF) sponsored by OWF Premier members Futurewei, Gen, and Accenture. A Python open source implementation is being developed by co-author Sam Smith and his colleagues at the Web of Trust GitHub community project.
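
For readers who want a feel for the cryptographic core such an implementation involves, below is a minimal illustrative sketch in Python (using the `cryptography` package) of combining public-key encryption with a sender signature, loosely in the spirit of the encrypt-and-sign design pillar described above. It is not the TSP construction: the spec’s actual HPKE Auth/Base-mode and ESSR constructions, CESR encoding, and VID resolution are all omitted, and every key, VID, and function name here is a stand-in.

```python
# Illustrative only: encrypt to the receiver, then sign as the sender.
# Not TSP; key management and VID resolution are out of scope here.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Long-term keys (stand-ins for keys a VID would resolve to).
sender_sig_key = Ed25519PrivateKey.generate()      # sender's signing key
receiver_kem_key = X25519PrivateKey.generate()     # receiver's key-agreement key

def _derive_key(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"tsp-demo").derive(shared)

def send(plaintext: bytes, receiver_vid: bytes) -> dict:
    # Ephemeral Diffie-Hellman to the receiver's public key, then AEAD encryption.
    eph = X25519PrivateKey.generate()
    key = _derive_key(eph.exchange(receiver_kem_key.public_key()))
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, plaintext, receiver_vid)
    # The sender's signature binds the ciphertext to the intended receiver.
    signature = sender_sig_key.sign(ciphertext + receiver_vid)
    return {"eph_pub": eph.public_key(), "nonce": nonce, "ct": ciphertext, "sig": signature}

def receive(msg: dict, receiver_vid: bytes) -> bytes:
    # Verify the sender's signature first, then decrypt.
    sender_sig_key.public_key().verify(msg["sig"], msg["ct"] + receiver_vid)
    key = _derive_key(receiver_kem_key.exchange(msg["eph_pub"]))
    return ChaCha20Poly1305(key).decrypt(msg["nonce"], msg["ct"], receiver_vid)

# Demo round trip with a made-up VID:
msg = send(b"hello from a TSP-style endpoint", receiver_vid=b"did:example:bob")
assert receive(msg, receiver_vid=b"did:example:bob") == b"hello from a TSP-style endpoint"
```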

If you are interested in contributing to either of these projects or starting your own, we welcome your collaboration. Just contact us via the ToIP contact page or GitHub.

What kind of feedback are we seeking on this draft?

As always, we would like feedback on the usual questions about any new protocol specification:

Is the spec clear and understandable?
Are there any missing sections?
Are there places where more examples or illustrations would be helpful?
Are there specific test vectors you would like to see added?

In addition, we are specifically looking for your feedback in the following areas:

How would you imagine using the TSP? How can it enhance what you already have?
What types of trust task protocols are you most interested in layering over the TSP?
Does the TSP provide the baseline capabilities you need to support your planned trust task protocol(s)?
Does the TSP enable your solution to be more decentralized?
What types of VIDs do you plan to implement support for?
What types of transport protocols do you intend to bind to?
What cryptographic algorithms do you want or need to use?
What problems are you trying to solve in your tech stack that are not addressed by existing solutions and can or should be addressed by TSP?
Are there other protocols in development that we are not aware of that may conflict with or complement TSP?

How can you provide feedback?

To review the specification:

GitHub Pages version: https://trustoverip.github.io/tswg-tsp-specification
Markdown version: https://github.com/trustoverip/tswg-tsp-specification

To make a comment, report a bug, or file an issue, please follow the ToIP Public Review Process on GitHub:

Bugs/Issues: https://github.com/trustoverip/tswg-tsp-specification/issues
Discussion: https://github.com/trustoverip/tswg-tsp-specification/discussions

The post ToIP Announces the First Implementers Draft of the Trust Spanning Protocol Specification appeared first on Trust Over IP.


Ceramic Network

Ceramic World 03

Welcome to the third edition of CeramicWorld, our monthly ecosystem newsletter. We have lots of updates to share with you all - EthDenver recap, the latest updates on the Ceramic roadmap, the OrbisDB alpha launch, and so much more. Let’s dive in!

Welcome to the third edition of CeramicWorld, our monthly ecosystem newsletter. We have lots of updates to share with you all - EthDenver recap, the latest updates on the Ceramic roadmap, the OrbisDB alpha launch, and so much more. Let’s dive in!

EthDenver 2024 recap

Ceramic and Tableland co-hosted the Proof of Data Summit at ETHDenver. The event was a full-day community gathering focusing on reputation, identity, DePIN, decentralized AI, and decentralized computing.

The summit featured engaging lightning talks, technical discussions, and panels with industry leaders, sparking new ideas and collaborations in decentralized technologies. We heard from Juan Benet (Protocol Labs), MetaMask, Karma3 Labs, Fluence, Gensyn, and more, who shared their expertise and perspectives, helping us gain a deeper appreciation for the power of decentralized networks.

If you missed any of these talks, you can catch up on the conversations on our YouTube channel!

Watch the recordings of Proof of Data Summit

Orbis team announces OrbisDB Beta - a new database powered by Ceramic

Last month, the Orbis team kicked off EthDenver with a bang—they announced the OrbisDB beta release. OrbisDB is a new database built using Ceramic’s upcoming Data Feed API. This is a big leap in Ceramic’s ecosystem growth, as we hope to see more tools like this built on top of Ceramic.

OrbisDB offers an intuitive SQL interface to explore and query data stored on Ceramic. It also comes with an easy-to-use, no-code setup and an ORM-like SDK framework. On top of that, OrbisDB Plugins can be used to enhance OrbisDB instances and enable different actions during the stream’s lifecycle, including:

Gating mechanisms
Enrichment of streams
Trigger actions post-indexing
Creation of streams

It’s a big leap for the Ceramic ecosystem, unlocking new tools and use cases. We already have some ideas floating around using OrbisDB Plugins as game engines.

Start building with OrbisDB today!

Oamo becomes the first points provider on Ceramic

Oamo has partnered with Ceramic to take the first steps towards developing and standardizing the first decentralized points system.

Having partnered with Ceramic for a long time on projects like harnessing Ceramic's innovative DID (Decentralized Identifier) infrastructure and ComposeDB for zero-party data storage, Oamo is now the first points data provider on Ceramic. Oamo will issue tens of millions of publicly available credentials based on wallets’ on-chain activity and holdings.

This partnership will supercharge the Ceramic ecosystem:

Enhancing digital identity and engagement through credential distribution; a decentralized and standardized points system and a credentials and points SDK will make it easier for developers to build using the new points system.
Allowing users to claim their credentials and scorecards seamlessly.
Powering development across multiple use cases - DeFi, NFT, Wallet Providers, Liquid Staking, Game Development, and others.

Read more on the Ceramic blog.

Points on Ceramic

Like many in the ecosystem, we’ve been thinking a lot about points lately. They represent a powerful way to attract, measure and reward users for activity, reputation and credentials. However, many teams still work with points tabulated and stored on centralized databases.

To unlock one of the core promises of web3, we’ve been engaging deeply in building and fostering the technical infrastructure for truly open, decentralized points storage.

To learn more, check out our new Points landing page. To get a deeper sense of how we’re thinking about points, read ‘Points: How Reputation & Tokens Collide’ by our co-founder, Danny Zuckerman. We built our own points application on Ceramic. Get all the technical details here, thanks to our partner engineer, Mark Krasner.

Check out the Points landing page

⚠️ Breaking change notice: ComposeDB v0.7 is out. Upgrade your Ceramic server to v5.3
A recent release of ComposeDB v0.7 introduced quite a few new features, including a new SET account relation when defining models, and more (check out the detailed release notes). To use this version of ComposeDB, developers will have to upgrade their Ceramic server to v5.3, as it is not compatible with earlier versions of Ceramic.

This release included a few patches that enabled upgrading ComposeDB and the Ceramic server independently. Depending on which versions of Ceramic and ComposeDB you have been running, and whether or not you have been using deterministic documents, there are a few upgrade considerations to keep in mind.

Check out this forum post for more details and instructions on how to upgrade.

Ceramic roadmap update
Recently, we published our quarterly Ceramic roadmap update. Over the past few months, the core Ceramic team has been making strides in improving the Ceramic server performance, shipping ComposeDB features like SET account relation, and, most importantly, taking big steps towards enabling developers to create custom indexes by announcing the Data Feed API alpha release.
Check out the detailed roadmap overview here.

Build using ComposeDB's new SET account relation
Recently, we added a new account relation to ComposeDB - the SET account relation. It complements the SINGLE and LIST account relations by creating a new type of relation that must be unique per combination of user account (DID) and instance of a model. With the SET account relation you can now implement features like "post likes", meaning that each user can "like" a post only once.
Check out the documentation and start building using the SET account relation.

Ceramic Community Content

CAIP: CIP-146 After
TRENDING: Toward the first decentralized points system: Oamo becomes the first points provider on Ceramic
DISCUSSION: Orbis plugins as gaming engines; Ceramic x OrbisDB
WORKSHOP: Intro to ComposeDB on Ceramic; Ceramic at LearnWeb3 Decentralized Intelligence Hackathon
TUTORIAL: Building points on Ceramic - an Example and Learnings by Mark Krasner
BLOGPOST: Points: How Reputation & Tokens Collide by Danny Zuckerman

Contact Us

Want to get in touch with the Ceramic core team? Fill out this form (1m). Otherwise, drop us a note in the Forum.


Velocity Network

Empowering Self-Sovereign Identity: Revolutionizing Data Control With Velocity Network

The post Empowering Self-Sovereign Identity: Revolutionizing Data Control With Velocity Network appeared first on Velocity.

Wednesday, 10. April 2024

EdgeSecure

Ecosystem for Research Networking (ERN) Summit 2024

The post Ecosystem for Research Networking (ERN) Summit 2024 appeared first on NJEdge Inc.

NEWARK, NJ, April 10, 2024 –

Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge, and Ecosystem for Research Networking (ERN) Steering Committee member, is joining an esteemed group of scientific and cyberinfrastructure researchers at the Ecosystem for Research Networking (ERN) Summit 2024. 

The Ecosystem for Research Networking Summit provides the scientific and cyberinfrastructure research community an opportunity to come together and discuss the ERN mission and accomplishments, hear from domain researchers and CI professionals at smaller institutions about the successes and challenges related to leveraging local, regional, and national resources for projects, and learn about funding resources and partnership opportunities, as well as regional and national communities.

This year’s summit will be held April 11-12, 2024 at the Pittsburgh Supercomputing Center in Pittsburgh, PA. There will be open discussions and conversations on focus areas and policies as they pertain to areas of community interest including AI, quantum, Big Data, cybersecurity and protecting data, research instruments, workforce development, applications for ERN, education and training.

“As the co-chair of the ERN Broadening the Reach working group, gaining a better understanding of the advanced computing and resource requirements and how the ERN can support the needs of the smaller institutions, historically black colleges and universities (HBCUs), and Minority Serving institutions is an important aspect of our mission. I am excited to learn from the community how the ERN can expand outreach and increase collaboration opportunities for broadening the reach and impact in support of the research community in the smaller-less resourced institutions.”

— Dr. Forough Ghahramani
Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge

Featuring a keynote speech titled Re-imagining American Innovation: Bridging the Gap to Unlock America’s Stranded Brilliance, presented by Dr. James Martin, NSF Equal Opportunities in Science and Engineering Committee member and Vice Chancellor for STEM Research and Innovation at the University of Pittsburgh, the two-day summit will also feature domain researchers and CI professionals from smaller institutions discussing the successes and challenges of leveraging local, regional, and national resources for projects, along with sessions on funding resources, partnership opportunities, and regional and national communities. In addition to representation from small schools, HBCUs, and MSIs, we are grateful to National Science Foundation (NSF) colleagues who will also be in attendance, providing opportunities for interaction between attendees and the NSF program directors. The full Summit agenda is available HERE.

Elaborates Dr. Barr von Oehsen, Director of the Pittsburgh Supercomputing Center, “Scientific discoveries have always been driven by advances in instruments of observation. Today, experimental tools are more advanced and more costly to construct and maintain, and the interpretation and simulation of data is more dependent on the use of cutting-edge computing resources, services, and knowledge. Consequently, many academic institutions lack access to these facilities. Our aim is to democratize access to research instruments, and the ERN Summit provides a platform for devising strategies to achieve this objective.”

For Summit details please visit the ERN Summit Events website. Any questions, please contact us via email at info@ern.ci.

About Edge

Edge serves as a member-owned, nonprofit provider of high performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable and thought-leading purpose-built, advanced connectivity, technologies and services.

The post Ecosystem for Research Networking (ERN) Summit 2024 appeared first on NJEdge Inc.


Oasis Open Projects

OASIS Open: the Best Suite of Standards for ESG Data Reporting and Compliance

The role of audit and assurance in environmental, social, and governance (ESG) reporting is crucial for enhancing the credibility, reliability, and accuracy of ESG disclosures. As investors, regulators, and other stakeholders increasingly rely on ESG information to make informed decisions, the demand for high-quality, verifiable ESG data grows. Auditors and assurance providers play a key […]

By Francis Beland, Executive Director, OASIS Open

The role of audit and assurance in environmental, social, and governance (ESG) reporting is crucial for enhancing the credibility, reliability, and accuracy of ESG disclosures. As investors, regulators, and other stakeholders increasingly rely on ESG information to make informed decisions, the demand for high-quality, verifiable ESG data grows.

Auditors and assurance providers play a key role in verifying ESG reports, ensuring they meet established standards and guidelines, and providing stakeholders with confidence in the reported information. Integrating OASIS Open standards such as UBL, OData, ebXML or STIX/TAXII can significantly enhance the effectiveness and efficiency of audit and assurance processes in ESG reporting.

Enhanced Data Exchange and Interoperability

UBL and ebXML facilitate standardized electronic business document exchange. AS4, a standard for secure document exchange, ensures that ESG data and reports are transmitted between entities securely and reliably.

Secure Data Access and Authentication

SAML can be used to secure access to ESG reporting and data systems, ensuring that only authorized individuals and entities can view or modify sensitive ESG data.

Standardization of Codes and Terms

Genericode and Code List Representation standards help in defining and using standardized codes and terminologies in ESG reporting.

Efficient Data Querying and Management

OData facilitates simple and standardized queries for data, including ESG information stored across different databases and systems. BDXR standards can be used to discover and connect with ESG reporting entities and systems, streamlining the process of obtaining necessary reports and data for auditing purposes.

Cybersecurity and Information Sharing

STIX/TAXII standards for cybersecurity threat information sharing can help auditors and assurance providers stay informed about potential cyber threats to ESG reporting systems.

Blockchain-based Verification

The Baseline Protocol offers a framework for establishing verifiable, consistent records of ESG data and transactions on public blockchains without exposing sensitive information.

By leveraging these OASIS Open standards, auditors and assurance providers can ensure that ESG reporting is not only consistent and reliable but also meets the high standards of data security, integrity, and accessibility demanded by stakeholders. These technologies enable more efficient audit processes, reduce the risk of errors, and increase the overall trust in ESG reporting.

The post OASIS Open: the Best Suite of Standards for ESG Data Reporting and Compliance appeared first on OASIS Open.


MOBI

SC-ABeam

SC-ABeam Automotive Consulting was established as a joint venture between Sumitomo Corporation, a general trading company with wide-ranging business operations that cover the automotive and mobility sectors, and ABeam Consulting, a global consulting firm originating in Asia. Drawing on the strengths of its two corporate investors, SC-ABeam will contribute to the automotive and mobility sectors [...

SC-ABeam Automotive Consulting was established as a joint venture between Sumitomo Corporation, a general trading company with wide-ranging business operations that cover the automotive and mobility sectors, and ABeam Consulting, a global consulting firm originating in Asia. Drawing on the strengths of its two corporate investors, SC-ABeam will contribute to the automotive and mobility sectors by engaging in consulting activities focused on value creation in order to achieve sustainable growth in conjunction with society. https://www.sc-abeam.com/en/

The post SC-ABeam first appeared on MOBI | The New Economy of Movement.


MyData

Data Spaces Alliance Finland: United to move faster and stronger 

The Alliance brings together Finnish pioneers in data technology and offers its members a unified view to develop solutions that cross their organizational boundaries.  The Alliance is a collaborative community working to accelerate, build, and utilize data spaces in Finland. It strives to accelerate the growth and maturity of the Finnish data space initiatives that […]

Tuesday, 09. April 2024

We Are Open co-op

Examining the Roots

Unpacking the foundations of Verifiable Credentials Image CC BY-ND Visual Thinkery for WAO Did you ever consider that looking at a tree only reveals half of its story? Much like a tree’s roots, which stretch as a wide and deep as its branches, the visible aspect of technology barely scratches the surface. In this post, we’re going to look at the the underpinnings to technologies such as
Unpacking the foundations of Verifiable Credentials Image CC BY-ND Visual Thinkery for WAO

Did you ever consider that looking at a tree only reveals half of its story? Much like a tree’s roots, which stretch as wide and deep as its branches, the visible aspect of technology barely scratches the surface.

In this post, we’re going to look at the underpinnings of technologies such as microcredentials, particularly those based on Verifiable Credentials. We share two crucial insights: the importance of understanding the ideological foundations we use, and how seemingly similar technologies can differ significantly beneath the surface.

The old adage ‘technology is not neutral’ may be true, but that’s just the tip of the iceberg. Technologies that achieve widespread use do so because of rich, complex backgrounds.

The Role of Standards

In our everyday lives, we interact seamlessly with technologies that allow us to pay for a coffee with our smartphones using Apple or Google Wallet. We scan QR codes to find a link to websites. We use digital boarding passes.

All of these examples are built upon standards — agreements on how technologies should operate, which are developed collaboratively by individuals and organisations. These people either have a deep interest in the topic, work on it professionally, or both!

Image CC BY-ND Visual Thinkery for WAO

The aim behind Verifiable Credentials is for them to be integrated into society to the same extent as payment methods and boarding passes. The difference is that they help us prove our identity and our achievements.

As we highlighted in a previous post, standards ensure consistency:

A standard can be just the usual way of doing something. A standard can also be a reference to make sure things can be understood in the same way, no matter where or what is referencing the standard.
For example, a kilogram in France is the same as a kilogram in Germany, and a kilogram of feathers weighs the same as a kilogram of bricks. The kilogram is the standard, but where it is applied or what it is used for is up to whoever is using it.

This consistency is of critical importance for credentials to remain valid and recognised, long outliving the organisations that initially adopt them.

Community Interaction

Standards don’t spontaneously arrive, but are rather the fruits of community interaction. This collaboration happens in formal settings such as the W3C or the IEEE, or through more informal groups, as was the case with ActivityPub. The latter laid the foundation for Fediverse apps such as Mastodon.

The drive to develop Verifiable Credentials has been fueled by various needs, from decentralising proof of identity to enabling the issuing of trusted documents such as passports and degrees at scale. There are also those looking to Verifiable Credentials to finally deliver on the dream of Open Recognition.

Diversity within communities developing standards is vital. Without a range of perspectives, there is the risk that the resulting technologies serve only a fraction of their potential users. When we’re talking about proof of identity and achievement, this is an important consideration.

The Importance of Ideology

Ideologies shape the vision and goals of a community, and encompass systems of beliefs and values. In the context of Verifiable Credentials, ‘Open’ (or openness) is a key ideology. This is not just in the sense of Open Source code but in terms of broader principles around transparency, inclusivity, adaptability, collaboration, and the importance of community.

Image CC BY-ND Visual Thinkery for WAO

Going back to the tree analogy at the top of this post, a microcredential issued using the Verifiable Credentials standard is deeply rooted in community consensus and open standards. This contrasts with other credentialing methods which may use proprietary technologies, lack adherence to standards, and ignore the broader ethos of openness.

So, to embrace the full potential of technologies built on Verifiable Credentials, we need to dig beneath the surface. We must understand and appreciate that complex supporting structure of community, standards, and ideology. This understanding helps guide us towards contributing to a more just, inclusive, secure, and adaptable digital ecosystem.

Conclusion

In examining the foundations of Verifiable Credentials, we uncover a complex blend of ideology, standards, and community collaboration. These core elements go beyond mere technical specifications, and help define the technology’s purpose and potential.

As we explore the underlying principles of technologies such as microcredentials, it’s evident that our interaction with these technologies should extend deeper than their surface-level function. Embracing values of openness, inclusivity, and collective effort, we can contribute to a digital landscape that safeguards individual rights while promoting innovation and trust.

We all need to dig deeper, explore the ideological foundations, understand the importance of standards, and actively participate in the communities that build our world. Reach out to us and let’s work together to help steer towards a future where digital credentials support and empower everyone.

Doug Belshaw and Laura Hilliger collaborated closely, as they tend to do, on this post.

Examining the Roots was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.


Content Authenticity Initiative

Durable Content Credentials

Ensuring reliable provenance information is available no matter where a piece of content goes by combining C2PA metadata, watermarking, and fingerprinting technology.

by Andy Parsons, Sr. Director, Content Authenticity Initiative

Faced with a newly chaotic media landscape made up of generative AI and other heavily manipulated content alongside authentic photographs, video, and audio, it is becoming increasingly difficult to know what to trust.  

Understanding the origins of digital media and if/how it was manipulated - as well as sharing that information with the consumer - is now possible through Content Credentials, the global open technical specification developed by the C2PA, a consortium of over 100 companies working together within the Linux Foundation. 

Implementation of Content Credentials is on the rise, with in-product support released or soon to be released by Adobe, OpenAI, Meta, Google, Sony, Leica, Microsoft, Truepic, and many other companies.  

As this technology becomes increasingly commonplace, we’re seeing criticism circulating that relying solely on Content Credentials’ secure metadata, or solely on invisible watermarking to label generative AI content, may not be sufficient to prevent the spread of misinformation. 

To be clear, we agree. 

That is why, since its founding in 2021, the C2PA has been hard at work creating a robust and secure open standard in Content Credentials. While the standard focuses on a new kind of “signed” metadata, it also specifies measures to make the metadata durable, or able to persist in the face of screenshots and rebroadcast attacks. 

Content Credentials are sometimes confusingly described as a type of watermark, but watermarking has a specific meaning in this context and is only one piece in the three-pronged approach represented by Content Credentials. Let’s clarify all of this. 

The promise of Content Credentials is that they can combine secure metadata, undetectable watermarks, and content fingerprinting to offer the most comprehensive solution available for expressing content provenance for audio, video, and images.

Secure metadata: This is verifiable information about how content was made that is baked into the content itself, in a way that cannot be altered without leaving evidence of alteration. A Content Credential can tell us about the provenance of any media or composite. It can tell us whether a video, image, or sound file was created with AI or captured in the real world with a device like a camera or audio recorder. Because Content Credentials are designed to be chained together, they can indicate how content may have been altered, what content was combined to produce the final content, and even what device or software was involved in each stage of production. The various provenance bits can be combined in ways that preserve privacy and enable creators, fact checkers, and information consumers to decide what’s trustworthy, what’s not, and what may be satirical or purely creative.   

Watermarking: This term is often used in a generic way to refer to data that is permanently attached to content and hard or impossible to remove. For our purposes here, I specifically refer to watermarking as a kind of hidden information that is not detectable by humans. It embeds a small amount of information in content that can be decoded using a watermark detector. State-of-the-art watermarks can be impervious to alterations such as the cropping or rotating of images or the addition of noise to video and audio. Importantly, the strength of a watermark is that it can survive rebroadcasting efforts like screenshotting, pictures of pictures, or re-recording of media, which effectively remove secure metadata.

Fingerprinting: This is a way to create a unique code based on pixels, frames, or audio waveforms that can be computed and matched against other instances of the same content, even if there has been some degree of alteration. Think of the way your favorite music-matching service works, locating a specific song from an audio sample you provide. The fingerprint can be stored separately from the content as part of the Content Credential. When someone encounters the content, the fingerprint can be re-computed on the fly and matched against a database of Content Credentials and its associated stored fingerprints. The advantage of this technique is it does not require the embedding of any information in the media itself. It is immune to information removal because there is no information to remove.
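
As a rough illustration of the fingerprinting idea (not the C2PA soft-binding algorithms, which are not reproduced here), the following sketch computes a simple perceptual “average hash” of an image using Pillow. Similar images produce hashes that differ in only a few bits, which is the inexact-match property described above; the file names in the usage note are hypothetical.

```python
# Illustrative only: a toy perceptual fingerprint (average hash) for images.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size, grey-scale, and set one bit per pixel
    that is brighter than the mean. Similar images give nearby hashes."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | int(value > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Usage (hypothetical files): a distance of only a few bits out of 64 suggests
# the same underlying image even after mild edits such as resizing.
# d = hamming_distance(average_hash("original.jpg"), average_hash("reposted.jpg"))
```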

So, we have three techniques that can be used to inform consumers about how media came to be. If each of these techniques were robust enough to ensure the availability of rich provenance no matter where the content goes, we would have a versatile set of measures, each of which could be applied where optimal and as appropriate.  

However, none of these techniques is durable enough in isolation to be effective on its own. Consider: 

Even if Content Credentials metadata cannot be tampered with without detection, metadata of any kind can be removed deliberately or accidentally. 

Watermarking is limited by the amount of data that can be encoded without visibly or audibly degrading the content, and even then, watermarks can be removed or spoofed. 

Fingerprint retrieval is fuzzy. Matches cannot be made with perfect certainty, meaning that while useful as a perceptual check, they are not exact enough to ensure that a fingerprint matches stored provenance with full confidence. 

But combined into a single approach, the three form a unified solution that is robust and secure enough to ensure that reliable provenance information is available no matter where a piece of content goes. This single, harmonized approach is the essence of durable Content Credentials.  

Here is a deeper dive into how C2PA metadata, watermarks, and fingerprints are bound to the content to achieve permanent, immutable provenance. The thoughtful combination of these techniques leverages the strengths of each to mitigate the shortcomings of the others.  

A simple comparison of the components of durable Content Credentials, and their strength in combination.

Let’s look at how this works. First, the content is watermarked using a mode-specific technique purpose-built for audio, video, or images. Since a watermark can only contain an extremely limited amount of data, it is important to make the most of the bandwidth it affords. We therefore encode a short identifier and an indicator of where the C2PA manifest, or the signed metadata, can be retrieved. This could be a Content Credentials cloud host or a distributed ledger/blockchain. 

Next, we compute a fingerprint of the media, essentially another short numerical descriptor. The descriptor represents a perceptual key that can be used later to match the content to its Content Credentials, albeit in an inexact way as described earlier. 

Then, the identifier in the watermark and the fingerprint are added to the Content Credential, which already includes data pertaining to the origin of the content and the ingredients and tools that were used to make it. Now we digitally sign the entire package, so that it is uniquely connected to this content and tamper evident. And finally, the Content Credential is injected into the content and stored remotely. And just like that, in a few milliseconds, we have created a durable Content Credential. 
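
The following sketch walks through that creation flow in Python. Every component is a toy stand-in (a visibly appended “watermark”, a hash-based “fingerprint”, an HMAC “signature”, and an in-memory dictionary as the Content Credentials store); real systems use imperceptible watermarks, perceptual fingerprints, and certificate-backed signatures as described above, and none of these names are real C2PA/CAI APIs.

```python
# Toy stand-ins so the flow runs end to end; illustrative only.
import hashlib
import hmac
import json
import uuid

SIGNING_KEY = b"demo-signing-key"   # stand-in for the creator's signing key
MANIFEST_STORE = {}                 # stand-in for a Content Credentials cloud or ledger

def embed_watermark(media: bytes, identifier: str) -> bytes:
    # A real watermark hides the identifier imperceptibly in pixels or samples;
    # here it is simply appended so the flow stays visible.
    return media + b"||WM:" + identifier.encode()

def perceptual_fingerprint(media: bytes) -> str:
    # Stand-in for a perceptual (inexact-match) fingerprint.
    return hashlib.sha256(media).hexdigest()[:16]

def sign(data: dict) -> str:
    payload = json.dumps(data, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def make_durable_credential(media: bytes, provenance: dict) -> bytes:
    manifest_id = str(uuid.uuid4())
    marked = embed_watermark(media, manifest_id)        # 1. watermark carries a short identifier
    manifest = {
        "id": manifest_id,
        "provenance": provenance,                       # origin, ingredients, tools used
        "fingerprint": perceptual_fingerprint(marked),  # 2. soft binding to the content itself
    }
    manifest["signature"] = sign(manifest)              # 3. tamper-evident signature over the package
    MANIFEST_STORE[manifest_id] = manifest              # 4. store remotely for later lookup
    return marked                                       # a real system also embeds the manifest
```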

When a consumer of this media wishes to check the provenance, the process is reversed. If the provenance and content are intact, we need only verify the signed manifest and display the data. However, if the metadata has been removed, we make use of durability as follows: 

Decode the watermark, retrieving the identifier it stores. 

Use the identifier to look up the stored Content Credential on the appropriate Content Credentials cloud or distributed ledger. 

Check that the manifest and the content match by using the fingerprint to verify that there is a perceptual match, and the watermark has not been spoofed or incorrectly decoded. 

Verify the cryptographic integrity of the manifest and its provenance data. 

Again, within a few milliseconds we can fetch and verify information about how this content was made, even if the metadata was removed maliciously or accidentally. 
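
Continuing the toy sketch above, the recovery path might look like the following; again, the watermark decoding, fingerprint match, and signature check are simplified stand-ins for the real mechanisms (a real perceptual match tolerates some alteration rather than requiring an exact hash).

```python
def recover_credential(media: bytes):
    """Return the stored provenance for `media`, or None if it cannot be verified."""
    # 1. Decode the watermark to recover the identifier (toy decoding).
    if b"||WM:" not in media:
        return None
    identifier = media.rsplit(b"||WM:", 1)[1].decode()

    # 2. Look up the stored Content Credential.
    manifest = MANIFEST_STORE.get(identifier)
    if manifest is None:
        return None

    # 3. Check the fingerprint; the toy version must match exactly.
    if manifest["fingerprint"] != perceptual_fingerprint(media):
        return None

    # 4. Verify the signature over the manifest's provenance data.
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if sign(unsigned) != manifest["signature"]:
        return None

    return manifest["provenance"]
```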

This approach to durability is not appropriate for every use case. For example, if a photojournalist wishes to focus primarily on privacy, they may not wish to store anything related to their photos and videos on any server or blockchain. Instead, they would ensure that the chain of custody between the camera and the publisher is carefully maintained so that provenance is kept connected and intact, but not stored remotely. 

However, in many cases, durable Content Credentials provide an essential balance between performance and permanence. And although technology providers are just beginning to implement the durability approach now, this idea is nothing new. The C2PA specification has always included affordances for “soft bindings.”  

We recognize that although Content Credentials are an important part of the ultimate solution to help address the problem of deepfakes, they are not a silver bullet. For the Content Credentials solution to work, we need it everywhere — across devices and platforms — and we need to invest in education so people can be on the lookout for Content Credentials, feeling empowered to interpret the trust signals of provenance while maintaining a healthy skepticism toward what they see and hear online.  

Malicious parties will always find novel ways to exploit technology like generative AI for deceptive purposes. Content Credentials can be a crucial tool for good actors to prove the authenticity of their content, providing consumers with a verifiable means to differentiate fact from fiction.  

As the adoption of Content Credentials increases and availability grows quickly across news, social media, and creative outlets, durable Content Credentials will become as expected as secure connections in web browsers. Content without provenance will become the exception, provenance with privacy preservation will be a norm, and durability will ensure that everyone has the fundamental right to understand what content is and how it was made. 

Subscribe to the CAI newsletter to receive ecosystem news.

Stay connected and consider joining the movement to restore trust and transparency online.

Monday, 08. April 2024

Oasis Open Projects

Invitation to comment on four OData v4.02 specification drafts

The Open Data Protocol (OData) enables the creation of REST-based data services, which allow resources to be published and edited by Web clients using simple HTTP messages. The post Invitation to comment on four OData v4.02 specification drafts appeared first on OASIS Open.

First public review for Version 4.02 specifications - ends May 8th

OASIS and the OASIS Open Data Protocol (OData) TC [1] are pleased to announce that OData Version 4.02, OData Common Schema Definition Language (CSDL) XML Representation Version 4.02, OData Common Schema Definition Language (CSDL) JSON Representation Version 4.02, and OData JSON Format Version 4.02 are now available for public review and comment.

The Open Data Protocol (OData) enables the creation of REST-based data services, which allow resources, identified using Uniform Resource Locators (URLs) and defined in an Entity Data Model (EDM), to be published and edited by Web clients using simple HTTP messages. The public review drafts released today are:

– OData Version 4.02: This document defines the core semantics and facilities of the protocol.

– OData Common Schema Definition Language (CSDL) XML Representation Version 4.02: OData services are described by an Entity Data Model (EDM). The Common Schema Definition Language (CSDL) defines specific representations of the entity data model exposed by an OData service using XML, JSON, and other formats. This document specifically defines the XML representation of CSDL.

– OData Common Schema Definition Language (CSDL) JSON Representation Version 4.02: This document specifically defines the JSON representation of CSDL.

– OData JSON Format Version 4.02: This document extends the core specification by defining representations for OData requests and responses using a JSON format.
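
To make the protocol concrete, here is a minimal, hypothetical illustration of an OData request and response using Python’s requests library. The service root and entity set are invented for this example; the $filter/$select/$orderby/$top query options and the "value" response wrapper are standard OData conventions rather than anything specific to the 4.02 drafts.

```python
# Hypothetical service root and entity set, standard OData query options.
import requests

SERVICE_ROOT = "https://example.org/odata/v4/Sample"   # not a real service

response = requests.get(
    f"{SERVICE_ROOT}/Products",
    params={
        "$filter": "Price gt 20 and Category eq 'Books'",  # server-side filtering
        "$select": "Name,Price",                           # shape the payload
        "$orderby": "Price desc",
        "$top": "5",
    },
    headers={"Accept": "application/json"},
)
response.raise_for_status()

# An OData JSON response wraps matching entities in a "value" array and carries
# an "@odata.context" annotation describing the payload.
for product in response.json().get("value", []):
    print(product["Name"], product["Price"])
```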

The documents and related files are available here:

OData Version 4.02
Committee Specification Draft 01
28 February 2024

— OData Version 4.02. Part 1: Protocol
Editable source (Authoritative):
https://docs.oasis-open.org/odata/odata/v4.02/csd01/part1-protocol/odata-v4.02-csd01-part1-protocol.md
HTML:
https://docs.oasis-open.org/odata/odata/v4.02/csd01/part1-protocol/odata-v4.02-csd01-part1-protocol.html
PDF:
https://docs.oasis-open.org/odata/odata/v4.02/csd01/part1-protocol/odata-v4.02-csd01-part1-protocol.pdf
— OData Version 4.02. Part 2: URL Conventions
Editable source (Authoritative):
https://docs.oasis-open.org/odata/odata/v4.02/csd01/part2-url-conventions/odata-v4.02-csd01-part2-url-conventions.md
HTML:
https://docs.oasis-open.org/odata/odata/v4.02/csd01/part2-url-conventions/odata-v4.02-csd01-part2-url-conventions.html
PDF:
https://docs.oasis-open.org/odata/odata/v4.02/csd01/part2-url-conventions/odata-v4.02-csd01-part2-url-conventions.pdf
— OData Version 4.02. ABNF components:
https://docs.oasis-open.org/odata/odata/v4.02/csd01/abnf/

OData Common Schema Definition Language (CSDL) XML Representation Version 4.02
Committee Specification Draft 01
28 February 2024

Editable source (Authoritative):
https://docs.oasis-open.org/odata/odata-csdl-xml/v4.02/csd01/odata-csdl-xml-v4.02-csd01.md
HTML:
https://docs.oasis-open.org/odata/odata-csdl-xml/v4.02/csd01/odata-csdl-xml-v4.02-csd01.html
PDF:
https://docs.oasis-open.org/odata/odata-csdl-xml/v4.02/csd01/odata-csdl-xml-v4.02-csd01.pdf
XML schemas:
https://docs.oasis-open.org/odata/odata-csdl-xml/v4.02/csd01/schemas/

OData Common Schema Definition Language (CSDL) JSON Representation Version 4.02
Committee Specification Draft 01
28 February 2024

Editable source (Authoritative):
https://docs.oasis-open.org/odata/odata-csdl-json/v4.02/csd01/odata-csdl-json-v4.02-csd01.md
HTML:
https://docs.oasis-open.org/odata/odata-csdl-json/v4.02/csd01/odata-csdl-json-v4.02-csd01.html
PDF:
https://docs.oasis-open.org/odata/odata-csdl-json/v4.02/csd01/odata-csdl-json-v4.02-csd01.pdf
JSON schemas:
https://docs.oasis-open.org/odata/odata-csdl-json/v4.02/csd01/schemas/

OData JSON Format Version 4.02
Committee Specification Draft 01
28 February 2024

Editable source (Authoritative):
https://docs.oasis-open.org/odata/odata-json-format/v4.02/csd01/odata-json-format-v4.02-csd01.md
HTML:
https://docs.oasis-open.org/odata/odata-json-format/v4.02/csd01/odata-json-format-v4.02-csd01.html
PDF:
https://docs.oasis-open.org/odata/odata-json-format/v4.02/csd01/odata-json-format-v4.02-csd01.pdf

For your convenience, OASIS provides complete packages of the prose specifications and related files in ZIP distribution files. You can download the ZIP files at:

OData Version 4.02:
https://docs.oasis-open.org/odata/odata/v4.02/csd01/odata-v4.02-csd01.zip

OData Common Schema Definition Language (CSDL) XML Representation Version 4.02:
https://docs.oasis-open.org/odata/odata-csdl-xml/v4.02/csd01/odata-csdl-xml-v4.02-csd01.zip

OData Common Schema Definition Language (CSDL) JSON Representation Version 4.02:
https://docs.oasis-open.org/odata/odata-csdl-json/v4.02/csd01/odata-csdl-json-v4.02-csd01.zip

OData JSON Format Version 4.02:
https://docs.oasis-open.org/odata/odata-json-format/v4.02/csd01/odata-json-format-v4.02-csd01.zip

How to Provide Feedback

OASIS and the OData TC value your feedback. We solicit feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

This public review starts 09 April 2024 at 00:00 UTC and ends 08 May 2024 at 11:59 UTC.

Comments may be submitted to the TC by any person directly at:
Technical-Committee-Comments@oasis-open.org

Comments submitted for this work and for other work of this TC are publicly archived and can be viewed at:
https://groups.google.com/a/oasis-open.org/g/technical-committee-comments/.
Previous comments on OData works are archived at https://lists.oasis-open.org/archives/odata-comment/.

All comments submitted to OASIS are subject to the OASIS Feedback License [2], which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with the public review of these works, we call your attention to the OASIS IPR Policy [3] applicable especially [4] to the work of this technical committee. All members of the TC should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specifications, so that notice of them may be posted to the notice page for this TC’s work.

Additional information about these specifications and the OData TC may be found on the TC’s public home page.

========== Additional references:

[1] OASIS Open Data Protocol (OData) TC
https://groups.oasis-open.org/communities/tc-community-home2?CommunityKey=e7cac2a9-2d18-4640-b94d-018dc7d3f0e2
https://www.oasis-open.org/committees/odata/

Approval (four specifications): https://github.com/oasis-tcs/odata-specs/blob/256d65b9f5f6fa5c3f6c3caa341947e6c711fb8c/zip/Minutes%20of%202024-02-28%20Meeting%20%23463.md

[2] OASIS Feedback License:
https://www.oasis-open.org/who/ipr/feedback_license.pdf

[3] https://www.oasis-open.org/policies-guidelines/ipr/

[4] https://www.oasis-open.org/committees/odata/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr/#RF-on-RAND-Mode
RF on RAND Mode

The post Invitation to comment on four OData v4.02 specification drafts appeared first on OASIS Open.


Identity At The Center - Podcast

In the latest episode of The Identity at the Center Podcast,

In the latest episode of The Identity at the Center Podcast, we sit down with special guest Martin Kuppinger, Founder and Principal Analyst at KuppingerCole Analysts. We discussed topics ranging from who should oversee IAM to the end-of-life situation of SAP Identity Management. Also, we dove into the details about the upcoming European Identity and Cloud Conference in Berlin. It's an insightful c

In the latest episode of The Identity at the Center Podcast, we sit down with special guest Martin Kuppinger, Founder and Principal Analyst at KuppingerCole Analysts. We discussed topics ranging from who should oversee IAM to the end-of-life situation of SAP Identity Management. Also, we dove into the details about the upcoming European Identity and Cloud Conference in Berlin. It's an insightful conversation you won't want to miss. Listen to the full episode at idacpodcast.com or in your favorite podcast app.

#iam #podcast #idac

Friday, 05. April 2024

FIDO Alliance

Tech Telegraph: 5 easy tasks to supercharge your cybersecurity

This post summarises five ways for consumers to improve their cybersecurity. FIDO USB keys and passkeys are included. Of passkeys, the article said: “Services including Google are switching to passwordless […]

This post summarises five ways for consumers to improve their cybersecurity. FIDO USB keys and passkeys are included. Of passkeys, the article said: “Services including Google are switching to passwordless ‘passkey’ authentication that supercharges security without needing 2FA, but that technology is still in its early adoption days.”


Source Security: Introducing the latest innovation from Sentry Enterprises: The batteryless multi-function biometric credential

Sentry Enterprises announced its latest innovation – a multi-factor physical and logical access solution that aims to offer more affordable and secure biometric security. It is FIDO2 compliant.


Android Headlines: Keeper password manager app to get Passkeys support following browser extensions

Keeper Security is introducing passkeys support to its smartphone applications, addressing the ongoing struggle websites face with user authentication methods. Passkeys, created by FIDO, allow users to ditch traditional usernames and passwords and securely log in to websites, apps, and other digital services.


Security Today: Mobile IDs, MFA and Sustainability Emerge as Top Trends in New HID Report

The end of passwords is near as the FIDO Alliance is paving the way toward new and more secure authentication options that will be part of a more robust Zero Trust architecture.


GS1

Reducing the global impact of environmentally harmful anaesthetic gases using a medical device

95% of anaesthetic gases used in an operation are not metabolised by the patient, so a significant proportion is released into the atmosphere.

It is estimated that anaesthetic gases account for around 100,000 tonnes of carbon dioxide per year in NHS and private hospitals across the UK (covering England, Scotland, Wales and Northern Ireland). These highly volatile gases make up 2% of the National Health Service’s (NHS) total carbon footprint and 15-20% of a theatre’s carbon footprint for each operation in England alone.

SageTech Medical’s circular economy solution safely captures available volatile anaesthetic agents in a reusable capture canister (SID-Can), which is recovered, processed and recycled back into a usable drug form to minimise the environmental impact.

GS1 Healthcare Case Studies 2023-2024: gs1seg230313_01_cases_studies_2024_final_.pdf

Thursday, 04. April 2024

Hyperledger Foundation

Hyperledger Aries: An Epicenter for Decentralized Digital Identity Collaboration and Innovation

How was 2023 for Hyperledger Aries, and what’s in store for 2024? Given that Aries is uniquely positioned for anyone adopting a decentralized approach to data and identity verification, let’s review what’s been a fascinating year for the project.


Elastos Foundation

Meme Contest: Win Your Share of 250 ELA in the Ultimate Crypto Meme Challenge!

In an age where dogs with hats become multi-billion dollar projects and creativity and fun intersect with the unpredictable and wild world of cryptocurrency, a contest emerges not just as a competition but as a canvas for the expression of wit, satire, and ingenuity. We are talking about the Elastos meme contest! Starting today, this contest like no other offers a total bounty of 250 ELA.

The goal is to create something Elastos-related that makes the community laugh. Ten winners will each claim a share of 25 ELA (~$100), navigating the realms of $ELA, Elastos, BeL2, and the SmartWeb. Here are the core details!

Contest Details: Total Prize: 250 ELA. Number of Winners: 10. Prize per Winner: 25 ELA. Selection Method: Winners chosen by the Elastos Social Media Team.

Contest Duration: Start Date: April 4, 2024. End Date: April 30, 2024 at 11:59 PM UTC.

Participation Requirements: Create a meme that includes at least 2 of the following: $ELA, Elastos, Bitcoin Layer2, SmartWeb. Share your meme on X (formerly Twitter) with the hashtags #ElastosMemes and #Elastos.

Best of luck fellow memers! We look forward to sharing all the creativity and announcing the winners on May 7th!

Here is a meme to quick-start the contest:


We Are Open co-op

Building a Minimum Viable Community of Practice (MVCoP)

Using a design workshop to jump start your CoP

At the end of the month, we’ll be gathering a number of community-based organisations, interns and faculty in an online design workshop to jump start a Community of Practice together with Cal State University and Participate. This community, supported through the Internet for All programme, will provide an online space for those promoting access and training in digital technologies in the CSUDH community.

We know that building a thriving and supportive online community requires some deliberate efforts. In this blog post, we explore the idea of a “minimum viable community of practice” (MVCoP). Once again, we dive into our Architecture of Participation (AoP) framework, and discuss running design workshops to empower individuals to co-design their community.

Image: header in the budding CoP on Participate

What is a MVCoP?

Image: cc-by-nd Visual Thinkery for WAO

Let’s start with the value of a Community of Practice. Communities of Practice (CoPs) are important for people to network, share knowledge, and learn from each other. They play a vital role in supporting lifelong learning and helping people level up throughout their careers. By connecting with like-minded peers who share similar interests and expertise, people can tap into a vast pool of collective wisdom and experience. Through active participation in discussions, collaborative projects, and sharing of best practices, community members continuously learn from each other, gaining new insights, perspectives, and opportunities.

Communities of practice not only foster professional growth but also create opportunities for personal development, encouraging individuals to stay curious, adapt to change, and embrace a lifelong learning mindset. We develop real relationships in these communities, and we learn about belonging and acceptance.

But before all of that, we have to design spaces that are supportive, interesting, engaging and inclusive. This takes intention. A MVCoP is a community set up to encourage organic and collaborative growth and belonging.

The Bare Minimum AoP

Image: AoP, from WAO

We’ve written extensively about one of the tools, the Architecture of Participation (AoP), that we use to help us build thoughtful, inclusive and empowering communities. Briefly, because we really have written about this all over the internet, the AoP is 8 steps to help people cover all their bases when thinking about volunteering, contributing and facilitating communities.

But what are the key steps for building a Minimum Viable Community of Practice? Well, we think that the most important to start with are:

Ways of working openly — Even at the very beginning, people need to see what’s happening within the community and how they can get involved. The open principles of transparency, inclusivity, collaboration and adaptability underpin community. Does the project have secret areas, or is everything out in the open?

Backchannels and watercoolers — We need places to share memes, make jokes and chat about the weather. These evolve organically, but including them in your MVCoP design suggests understanding of the social dynamics within groups of people. Are there ‘social’ spaces for members of the project to interact over and above those focused on project aims?

Celebration of milestones — Building a space where people belong means thinking of them as, you know, people. We need motivation and recognition. Does the MVCoP recognise the efforts and input of the community members?

Empowering Co-Design

Image: cc-by-nd Visual Thinkery for WAO

Running a design workshop is a great way to empower co-design within a community. By bringing together individuals with diverse perspectives, skills, and experiences, a design workshop creates a collaborative space where participants can actively contribute to shaping the emerging community.

We facilitate such workshops to encourage open dialogue, brainstorming, and hands-on activities. We aim to help people feel a sense of ownership and engagement because, after all, this is a community. This inclusive approach means that we can work together on the community’s mission, invitations and onboarding. Our intentions and goals are co-created and reflect the collective aspirations and needs of the group.

By actively involving community members in the design process, a design workshop not only strengthens the sense of belonging but also ensures that the resulting community is truly representative of its members’ interests, resulting in a more vibrant and sustainable community of practice.

Conclusion

Building a minimum viable community of practice (MVCoP) is a dynamic and iterative process that requires active involvement from community members. Empowering people to co-design their community will help that community evolve. The journey to a thriving community starts with small steps, but with dedication and collective effort, it can lead to a flourishing network of professionals supporting each other’s growth and success.

Need some community help? We’ve written an entire guide! Or you are very welcome to get in touch!

Building a Minimum Viable Community of Practice (MVCoP) was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 03. April 2024

Trust over IP

ToIP Announces the Implementers Draft of the Trust Registry Protocol Specification V2.0

Creating a simple and consistent way to programmatically get answers from authoritative ecosystem sources. The post ToIP Announces the Implementers Draft of the Trust Registry Protocol Specification V2.0 appeared first on Trust Over IP.

The Trust Registry Task Force (TRTF) at the Trust Over IP (ToIP) Foundation has released a new version of its ToIP Trust Registry Protocol Specification as an Implementers Draft. This draft aims to elicit open public feedback from implementers of any type of online registry service or client software. Instructions for providing feedback are at the end of this post.

Background

The TRTF was established in the summer of 2021 in response to the sudden demand for cross-jurisdiction verification of digital health credentials during the COVID crisis. In the fall of 2021, the TRTF produced a preliminary version of the ToIP Trust Registry Protocol Specification to begin public experimentation with the protocol.

As the adoption of digital wallets and verifiable digital credentials has grown, so has the challenge for relying parties to verify the authorized issuers of those credentials. The same applies to credential holders, who need to judge which relying parties they can safely trust with their credential data.

These digital trust decisions are complicated—in both directions. To make them more accessible, participants need trusted sources of information. That’s the job of trust registries. A trust registry is a system of record that contains answers to questions that help drive trust decisions. 

Many of these systems of record already exist. For example, almost every legal jurisdiction has a method of registering and licensing all types of businesses and professionals (CPAs, lawyers, doctors, professional engineers, etc.). And there are hundreds of registries of accredited institutions—universities, hospitals, insurance companies, nursing homes, etc.

New trust registries are also emerging for new online communities, including social networks, blockchains, and peer-to-peer networks. The challenge is that the methods of accessing the information across all these different registries are wildly inconsistent—if the information is available online at all.

The Trust Registry Protocol V2.0

The ToIP Trust Registry Protocol (TRP) V2.0 aims to solve this problem by providing a simple and consistent way to discover who is authorized to do what within a specific digital trust ecosystem. In short, it enables parties to ask programmatically:

Does entity X hold authorization Y under ecosystem governance framework Z?

In addition to that core query type, the TRP V2 also supports queries to:

Assist integrators in retrieving information critical to interacting with the trust registry (e.g. get a list of supported authorizations, namespaces, or resources).

Assert the relationships of the queried trust registry with other trust registries, allowing the development of a registry-of-registries capability.

Currently, in this Implementers Draft stage, this question can be asked via a RESTful (OpenAPI Specification 3.1.0) protocol query. Future versions of the TRP may support other underlying protocol specifications (e.g. DIDComm co-protocols, ToIP Trust Spanning Protocol). 
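To make the shape of such a query concrete, here is a minimal Python sketch of a relying party asking a trust registry whether an entity holds a given authorization. The endpoint path, query parameters, and response fields are illustrative placeholders, not the normative TRP V2.0 API; consult the specification for the actual OpenAPI definitions.

```python
import requests

# Hypothetical base URL of a trust registry exposing a TRP-style REST interface.
REGISTRY_BASE = "https://registry.example.com"

def entity_has_authorization(entity_id: str, authorization: str, ecosystem: str) -> bool:
    """Ask the registry: does entity X hold authorization Y under governance framework Z?

    The endpoint and parameter names below are illustrative only.
    """
    resp = requests.get(
        f"{REGISTRY_BASE}/entities/{entity_id}/authorizations",
        params={"authorization": authorization, "egf": ecosystem},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume the registry answers with a simple JSON document such as
    # {"authorized": true, "expires": "2025-01-01T00:00:00Z"}.
    return bool(resp.json().get("authorized", False))

if __name__ == "__main__":
    ok = entity_has_authorization(
        entity_id="did:example:issuer123",            # entity X
        authorization="issue-health-credential",      # authorization Y
        ecosystem="did:example:governance-framework"  # ecosystem governance framework Z
    )
    print("Authorized" if ok else "Not authorized")
```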

It is important to note that in V2, the TRP does not manage information inside the trust registry (i.e., the system-of-record). It is a read-only query protocol. Create, update, or delete operations may be specified in future protocol versions if demand exists.

To be clear, a trust registry does not create trust in itself. Your decision to trust the outputs from a trust registry is entirely yours. However, the information provided by trust registries is often required to build trust—especially between parties with no previous relationship. 

“A trust registry does not create authority. The authority of a trust registry is an outcome of governance.”

 – Jacques Latour, CTO, CIRA.ca (.ca top-level domain registry)

How to Provide Feedback

We invite feedback from implementers: systems integrators, developers, and product leaders who either need to share or access the information necessary to facilitate digital trust decisions within their ecosystem.

To review the specification:

Github Pages version: https://trustoverip.github.io/tswg-trust-registry-protocol/
Markdown version: https://github.com/trustoverip/tswg-trust-registry-protocol

To make a comment, report a bug, or file an issue, please follow the ToIP Public Review Process on GitHub:

Bugs/Issues: https://github.com/trustoverip/tswg-trust-registry-protocol/issues
Discussion: https://github.com/trustoverip/tswg-trust-registry-protocol/discussions

The post ToIP Announces the Implementers Draft of the Trust Registry Protocol Specification V2.0 appeared first on Trust Over IP.


Identity At The Center - Podcast

Join us for the latest Sponsor Spotlight edition of The Identity at the Center Podcast

Join us for the latest Sponsor Spotlight edition of The Identity at the Center Podcast. In this fully sponsored episode, we have an insightful discussion with Gil Hoffer, the Co-Founder and CTO of Salto. We delve into Gil's journey into the world of identity, the inception of Salto, and how they're revolutionizing DevOps for business apps and identity platforms like Okta to solve age-old configuration challenges.

Listen to our discussion on idacpodcast.com or on your preferred podcast app.

#iam #podcast #idac


Next Level Supply Chain Podcast with GS1

International Inventory Optimization with Burak Yolga, Forceget

Global supply chains are dramatically shifting due to economic forces such as rising interest rates and inflation. There is a pressing need for efficiency, from reducing FBA fees to renegotiating costs and finding ingenious savings in your supply chain.

Burak Yolga, Co-Founder and CEO of Forceget, talks with hosts Liz Sertl and Reid Jackson about this and his journey through the intricate world of global supply chains. Drawing on examples from industry leaders, he offers a fresh perspective on the transformative power of digitalization in business processes, focusing on enhancing visibility and standardization to scale. He discusses the complexities of managing international teams across time zones and the critical importance of environmentally conscious shipping practices, including cost-effective innovations like solar-powered vessels.

 

Key takeaways: 

How Forceget has mastered inventory management amidst fluctuating interest rates and complex international logistics.

The financial and operational advantages of eco-innovations and resilient supply chain practices ensure professionals stay ahead of industry trends and environmental mandates.

How their team leverages AI for inventory forecasting, efficient resource allocation, and contingency planning.

 

Resources: 

What is Inventory Management?

Resources for Improving Supply Chain Visibility

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with guest:

Burak Yolga on LinkedIn

Check out Forceget

 

Tuesday, 02. April 2024

Oasis Open Projects

Invitation to comment on Universal Business Language v2.4 before call for consent as OASIS Standard

UBL is the leading interchange format for business documents. The post Invitation to comment on Universal Business Language v2.4 before call for consent as OASIS Standard appeared first on OASIS Open.

Public review ends May 26th

OASIS and the OASIS Universal Business Language TC [1] are pleased to announce that Universal Business Language Version 2.4 is now available for public review and comment.

UBL is the leading interchange format for business documents. It is designed to operate within a standard business framework such as ISO/IEC 15000 (ebXML) to provide a complete, standards-based infrastructure that can extend the benefits of existing EDI systems to businesses of all sizes. The European Commission has declared UBL officially eligible for referencing in tenders from public administrations, and in 2015 UBL was approved as ISO/IEC 19845:2015.

Specifically, UBL provides:
– A suite of structured business objects and their associated semantics expressed as reusable data components and common business documents.
– A library of schemas for reusable data components such as Address, Item, and Payment, the common data elements of everyday business documents.
– A set of schemas for common business documents such as Order, Despatch Advice, and Invoice that are constructed from the UBL library components and can be used in generic procurement and transportation contexts.

UBL v2.4 is a minor revision to v2.3 that preserves backwards compatibility with previous v2.# versions. It adds new document types, bringing the total number of UBL business documents to 93.

The TC received three Statements of Use from Efact, Google, and Semantic [3].

The candidate specification and related files are available here:

Universal Business Language Version 2.4
Committee Specification 01
17 October 2023

Editable source (Authoritative):
docs.oasis-open.org/ubl/cs01-UBL-2.4/UBL-2.4.xml
HTML:
docs.oasis-open.org/ubl/cs01-UBL-2.4/UBL-2.4.html
PDF:
docs.oasis-open.org/ubl/cs01-UBL-2.4/UBL-2.4.pdf
Code lists for constraint validation:
docs.oasis-open.org/ubl/cs01-UBL-2.4/cl/
Context/value Association files for constraint validation:
docs.oasis-open.org/ubl/cs01-UBL-2.4/cva/
Document models of information bundles:
docs.oasis-open.org/ubl/cs01-UBL-2.4/mod/
Default validation test environment:
docs.oasis-open.org/ubl/cs01-UBL-2.4/val/
XML examples:
docs.oasis-open.org/ubl/cs01-UBL-2.4/xml/
Annotated XSD schemas:
docs.oasis-open.org/ubl/cs01-UBL-2.4/xsd/
Runtime XSD schemas:
docs.oasis-open.org/ubl/cs01-UBL-2.4/xsdrt/

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file at:
docs.oasis-open.org/ubl/cs01-UBL-2.4/UBL-2.4.zip

Members of the UBL TC [1] approved this specification by Special Majority Vote [2]. The specification had been released for public review as required by the TC Process [4].

Public Review Period

The 60-day public review starts 28 March 2024 at 00:00 UTC and ends 26 May 2024 at 23:59 UTC.

This is an open invitation to comment. OASIS solicits feedback from potential users, developers and others, whether OASIS members or not, for the sake of improving the interoperability and quality of its technical work.

Comments may be submitted to the TC by any person directly at:
Technical-Committee-Comments@oasis-open.org

Comments submitted for this work and for other work of this TC are publicly archived and can be viewed at:
https://groups.google.com/a/oasis-open.org/g/technical-committee-comments/
Previous comments on UBL works are archived at https://lists.oasis-open.org/archives/ubl-comment/.

All comments submitted to OASIS are subject to the OASIS Feedback License, which ensures that the feedback you provide carries at least the same obligations as those of the TC members. In connection with this public review of Universal Business Language Version 2.4, we call your attention to the OASIS IPR Policy [5], applicable especially [6] to the work of this technical committee. All members of the TC/OP should be familiar with this document, which may create obligations regarding the disclosure and availability of a member’s patent, copyright, trademark and license rights that read on an approved OASIS specification.

OASIS invites any persons who know of any such claims to disclose these if they may be essential to the implementation of the above specification, so that notice of them may be posted to the notice page for this TC’s work.

========== Additional references:
[1] OASIS Universal Business Language TC
https://groups.oasis-open.org/communities/tc-community-home2?CommunityKey=556949c8-dac8-40e6-bb16-018dc7ce54d6
former link: https://www.oasis-open.org/committees/ubl/

[2] Approval ballot:
https://groups.oasis-open.org/higherlogic/ws/groups/556949c8-dac8-40e6-bb16-018dc7ce54d6/ballots/ballot?id=3818

[3] Links to Statements of Use

Efact: https://lists.oasis-open.org/archives/ubl-comment/202402/msg00003.html
Google: https://lists.oasis-open.org/archives/ubl-comment/202402/msg00001.html
Semantic: https://lists.oasis-open.org/archives/ubl/202312/msg00007.html

[4] History of publication, including previous public reviews:
https://docs.oasis-open.org/ubl/csd02-UBL-2.4/UBL-2.4-csd02-public-review-metadata.html

[5] https://www.oasis-open.org/policies-guidelines/ipr/

[6] https://www.oasis-open.org/committees/ubl/ipr.php
https://www.oasis-open.org/policies-guidelines/ipr/#RF-on-Limited-Mode
RF on Limited Terms Mode

The post Invitation to comment on Universal Business Language v2.4 before call for consent as OASIS Standard appeared first on OASIS Open.


GS1

Andrew Tuerk

Andrew Tuerk, Chief Data Officer, Syndigo (Member excellence)

Trust over IP

ToIP Announces New Issuer Governance Requirements Guide for Verifiable Credentials – Public Comment Needed

Comment on our first effort to define standard requirements for issuers of verifiable credentials. The post ToIP Announces New Issuer Governance Requirements Guide for Verifiable Credentials – Public Comment Needed appeared first on Trust Over IP.

ToIP invites the public to comment on their newly released document, Issuers Requirements Guide for Governance Framework of Verifiable Credentials.

The mission of the Trust over IP (ToIP) Foundation is to define a complete architecture for Internet-scale digital trust that combines cryptographic assurance at the machine layer with human accountability at the business, legal, and social layers.  Part of that mission is to define generally accepted requirements for standard roles that play a critical part in accountability for digital trust.  

Our Governance Stack Working Group has completed a new deliverable, the Issuer Requirements Guide for Governance Frameworks of Verifiable Credentials (PDF), and is soliciting public comment using our public review process and this GitHub link. While many schemes in the US, UK and Canada have focused on elements of identity credential issuance and verification, this is the first effort to define standard requirements for issuers of verifiable credentials to ensure that their processes are transparent and consistent and meet the needs of relying parties and ecosystem governing bodies.

Verifiable credential ecosystems require both technical trust and human trust; the core requirements for the corresponding issuance processes are captured in this newly released document, which is being circulated for public review and comment. Verifiable credentials are a type of digital representation of claims or attributes about a subject, which can be an individual, organization, or thing. These credentials are tamper-evident, cryptographically secure, and can be verified by relying parties without the need for a central authority.

The Governance requirements of an issuer in a verifiable credential ecosystem can be summarized as follows:

Issuance of Credentials: The issuer is responsible for creating and issuing verifiable credentials to subjects based on certain claims or attributes. These credentials are digitally signed by the issuer using their private key, ensuring the authenticity and integrity of the information.

Trust and Reputation: The issuer’s reputation and trustworthiness are crucial in the verifiable credential ecosystem. Relying parties (such as service providers or verifiers) rely on the credentials being issued by reputable and trusted issuers. The credibility of the issuer is established through various mechanisms, such as being a well-known organization, being part of a recognized authority, or holding themselves accountable to the requirements of a governing authority.

Validation of Claims: Before issuing credentials, the issuer does its due diligence to validate the claims made in the credential. This validation process ensures that the information presented in the credential is accurate and can be trusted by relying parties.

Verification of Issuer and Holder: Issued credentials that contain links to the issuer and/or holder should be engineered so they can be cryptographically verified.

Privacy Considerations: Issuers need to handle personal data responsibly and in compliance with privacy regulations. They should only collect and use the minimum necessary data required to issue the credentials and should obtain explicit consent from the subjects.

Revocation and Expiry: For credentials that require expiration or revocation, issuers must have mechanisms in place to revoke or expire credentials if the claims become invalid or if the credentials are compromised. This is essential to maintain the trustworthiness of the digital trust ecosystem.

Interoperability: Issuers need to follow standardized formats and protocols to ensure that the issued credentials are interoperable and can be easily understood and verified by different relying parties.

Auditability and Accountability: Issuers should keep records of issued credentials for audit purposes and lifecycle maintenance, including updates to claims, re-issuance for any reason, and revocation. This enables traceability and accountability in case of disputes or issues with the credentials.

Transparency: The issuer should publicly disclose all the policies it follows in the process of claim and credential issuance and revocation. This disclosure should be included in a publicly available governance framework.
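As a rough illustration of the first two requirements above, the sketch below signs a minimal credential-like JSON payload with an issuer's Ed25519 private key and verifies it with the corresponding public key. It is not a conforming W3C Verifiable Credential or Data Integrity proof; it is just a demonstration of the sign-and-verify step that underpins issuance, and the field names are illustrative.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer key pair (in practice the private key lives in a KMS/HSM, not generated ad hoc).
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

# A minimal, illustrative credential payload (not a conformant W3C VC).
credential = {
    "issuer": "did:example:issuer123",
    "subject": "did:example:holder456",
    "claims": {"licensedProfessional": True, "jurisdiction": "CA"},
    "issued": "2024-04-02",
}

# Canonicalize (here: sorted-key JSON) and sign with the issuer's private key.
payload = json.dumps(credential, sort_keys=True, separators=(",", ":")).encode()
signature = issuer_key.sign(payload)

# A relying party holding the issuer's public key can check integrity and origin.
try:
    issuer_public.verify(signature, payload)
    print("Signature valid: credential is authentic and untampered")
except InvalidSignature:
    print("Signature invalid: reject the credential")
```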

The ToIP Issuer Requirements Guide is intended to help implementers meet the requirements for issuers of verifiable credentials within an ecosystem governed by a governance framework that conforms to the ToIP Governance Metamodel Specification. We encourage you to read this landmark document and provide feedback using the ToIP Public Review Process by following this GitHub Link to submit comments within the public review and comment period ending on May 31, 2024. If you have any questions regarding this document, please contact Scott Perry at scott.perry@schellman.com.

The post ToIP Announces New Issuer Governance Requirements Guide for Verifiable Credentials – Public Comment Needed appeared first on Trust Over IP.


Elastos Foundation

ELA: Bitcoin Merged-Mining, Halvings & Unique Economics

Elastos ($ELA) presents a compelling narrative in the cryptocurrency ecosystem, paralleling Bitcoin’s disinflationary ethos while charting a unique path through its technological integration and economic modelling. Let’s explore some of the key highlights of ELA and its economic model:

Merge Mining with Bitcoin: ELA’s merge mining with Bitcoin allows it to benefit from Bitcoin’s substantial hashing power (580.74 EH/s), with ELA itself achieving an impressive 293.69 EH/s, roughly 50%. This synergy enhances security while maintaining energy efficiency.

BPoS Validator System: The Elastos BPoS Supernodes add a secondary layer of security by verifying and signing each ELA mainchain block provided by Bitcoin miners. BPoS engages two participant groups, stakers and validators, with staked ELA used to vote for validators and APR earned by both.

Fixed Maximum Supply: ELA caps at 28.22 million coins, with the final coins expected to be minted by December 2105. This fixed supply mirrors Bitcoin’s scarcity principle, foundational to its value.

Disinflationary Nature: ELA follows a 4-year halving cycle similar to Bitcoin, effectively cutting its annual inflation rate in half. This model ensures a gradual decrease in new ELA supply, enhancing scarcity and potential value over time. Like Bitcoin, ELA’s halving reduces the reward for block production, transitioning incentives towards transaction fees over time and ensuring long-term network sustainability. The next halving is in December 2025.

Current Mining Dynamics: With a block generated every two minutes, ELA rewards are distributed among Bitcoin PoW miners (35%), BPoS validators (35%), and the CRC DAO treasury (30%). This distribution model incentivizes diverse participation in the network’s security and governance.
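As a back-of-the-envelope illustration of the reward split described above, the snippet below divides a hypothetical per-block reward among the three recipients using the 35/35/30 percentages and the two-minute block time. The reward amount itself is a made-up placeholder, not the actual ELA emission schedule.

```python
# Illustrative only: the per-block reward below is a placeholder, not real ELA emission data.
BLOCK_REWARD_ELA = 1.5          # hypothetical reward per block
BLOCKS_PER_DAY = 24 * 60 // 2   # one block every two minutes -> 720 blocks/day

SPLIT = {"PoW miners": 0.35, "BPoS validators": 0.35, "CRC DAO treasury": 0.30}

daily_emission = BLOCK_REWARD_ELA * BLOCKS_PER_DAY
for recipient, share in SPLIT.items():
    print(f"{recipient}: {BLOCK_REWARD_ELA * share:.3f} ELA/block, "
          f"{daily_emission * share:.1f} ELA/day")
```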

 

The Significance of Merge-Mining with Bitcoin

Merge mining enables Bitcoin miners to mine both Bitcoin and Elastos by running Elastos code alongside, without additional costs or energy. This leverages Bitcoin’s vast hashing power (580.74 EH/s, with ELA at 293.69 EH/s) to secure both networks efficiently. By integrating Elastos’s mining process with Bitcoin’s infrastructure, miners can earn extra rewards in ELA, fostering a mutually beneficial relationship that enhances ELA’s security and economic incentives. This dual mining opportunity not only augments revenue for Bitcoin miners but also promotes a cooperative ecosystem, highlighting merge mining with Bitcoin as a strategically valuable feature for bolstering network security while being environmentally considerate. ELA today has roughly 50% of Bitcoin’s security protecting the network’s value.

 

Earning APR with ELA.

What’s more, you can earn ELA as a community member, with Bitcoin’s security behind it. Here are three of the core ways:

1. Participate in Staking
APR: Up to 2-3%
Lockup Duration: 10 to 1000 days
Equity Tokens: 1 staked ELA = 1 voting token
Rewards: Based on amount and duration
Profit Sharing: 25% to node owners, 75% to stakers
Special Nodes: 12 CR Council nodes excluded from voting
Re-Voting: Necessary at pledge end to continue earning

Here is a detailed guide on how to stake.

2. Becoming a BPoS Validator
APR: Up to 22%
Entry Requirements: 2,000 ELA pledge, $6/month maintenance
Rewards: 25% of block rewards
Yield Factors: Staking amount and time
Selection: 36 nodes randomly chosen every 36 blocks
Rewards Distribution: Automated by the Elastos mainchain

Here is a detailed guide on how to become a BPoS Validator. Here is additional support on validator requirements.

3. Cyber Republic Council (CRC) DAO Member
APR: Up to 35% (rewards and sidechain transactions)
Cyber Republic Consensus: Governance mechanism for community decisions, Elastos sidechain blockchain validation (EVM and Identity), and ecosystem development, utilizing a delegate model for decision-making and proposal voting
CR Council Member Nodes: 12 community-elected delegates, chosen using ELA votes, responsible for decision-making on community affairs, proposal recommendation, and voting
Community Members: Rights include voting in elections, submitting proposals, and monitoring and impeaching council members
Election and Term: Participants need Elastos DIDs and a 5,000 ELA deposit. Election is via ELA voting; the top 12 candidates become council members. The election process starts one month (about 21,900 main chain blocks) before the current members’ term ends for seamless transitions. The next election term begins in April 2024. One-year term, with provisions for impeachment and automatic removal under specific conditions
Rewards and Responsibilities: Council members receive mainchain ELA rewards and sidechain (EVM and DID) transaction revenue

Learn more here.  Follow Cyber Republic Twitter for the latest updates on elections and guidance.

 

Elastos and ELA combine Bitcoin’s disinflationary approach with their own technological advancements and economic strategies, enhancing network security through merge mining with Bitcoin and offering a BPoS Validator System for additional security and APR. With a disinflationary model and a fixed supply limit of 28.22 million ELA, the ecosystem incentivizes participation through mining rewards distribution and provides APR earning opportunities via staking, BPoS validation, and the Cyber Republic Council (CRC) DAO governance. These features, alongside its economic policies, position Elastos as a distinctive and engaging platform in the cryptocurrency realm, aligning with Bitcoin’s ethos and security. Learn more here!


DIF Blog

DIF and KuppingerCole announce collaboration

The Decentralized Identity Foundation (DIF) and KuppingerCole are excited to announce a collaboration aimed at bringing new value to members, customers and digital transformation leaders.

DIF is a global membership organization that is building the foundational elements necessary to establish security, privacy, interoperability and trust between the participants in any digital ecosystem.

Founded in 2004, KuppingerCole is a European analyst company focusing on identities and access management, their governance, and risk management to facilitate innovation and secure, privacy-maintaining information management.

Planned activities include a program of joint virtual events and targeted publications. The two organizations are also exploring the potential to leverage their tools, processes and operational resources to create a first-of-a-kind industry collaboration platform.

“Our strategic partnership with KuppingerCole marks a pivotal moment for decentralized identity, accelerating its impact and reach,” said Kim Hamilton Duffy, Executive Director of DIF. “Our members are at the helm of creating the next-generation infrastructure for secure, user-focused ecosystems that will transform our digital interactions. Partnering with KuppingerCole will allow broader audiences to discover these innovations and explore new capabilities and business models they enable.”

“By combining our expertise in identity and access management with DIF's global reach and commitment to building trust in digital ecosystems, we are poised to deliver unparalleled insights and solutions to enterprises navigating the decentralized identity landscape. Together, we aim to empower organizations to embrace decentralized identity technologies confidently and securely, driving innovation and fostering trust in the digital age," added Martin Kuppinger, Co-Founder of  KuppingerCole.

The partnership kicks off with Road to EIC: Leveraging Reusable Identities in Your Organization, a virtual event at 7:00am PST, 10:00am EST, 4:00pm CEST on April 03. A second virtual event, Building Trust in AI, is being planned for May. 

DIF is set to play a prominent role at the European Identity and Cloud conference (EIC) in Berlin from 4 - 7 June, including a keynote address, use case presentations and panel discussions featuring DIF leadership, members and liaison partners. DIF members are eligible for a 25% reduction on their ticket to attend the event (on top of any other discounts). Simply enter code eic24dif25members during the last step of booking: Get tickets | EIC 2024 (kuppingercole.com).

 

 

Monday, 01. April 2024

OpenID

Implementer’s Draft of OpenID for Verifiable Credential Issuance Approved

The OpenID Foundation membership has approved the following specification as an OpenID Implementer’s Draft:

OpenID for Verifiable Credential Issuance 1.0

This is the first Implementer’s Draft of this specification.

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This specification is a product of the OpenID Connect Working group.

The Implementer’s Draft is available at:

https://openid.net/specs/openid-4-verifiable-credential-issuance-1_0-ID1.html

The voting results were:

Approve – 79 votes
Object – 2 votes
Abstain – 12 votes

Total votes: 93 (out of 321 members = 29% > 20% quorum requirement)

The post Implementer’s Draft of OpenID for Verifiable Credential Issuance Approved first appeared on OpenID Foundation.


Identity At The Center - Podcast

April arrives with the latest episode of the Identity at the Center Podcast

April arrives with the latest episode of the Identity at the Center Podcast. We were joined by Jeff Reich of the IDSA to bring awareness to Identity Management Day taking place next week. We also talked about what's new with the IDSA and even shared some light-hearted thoughts on the best April Fool's pranks we've seen.

You can catch the episode at idacpodcast.com or on your favorite podcast app.

#iam #podcast #idac


Project VRM

Survey Hell

On a scale of one to ten, how do you rate the  Customer Experience Management (CEM) business?

I give it a zero.

Have you noticed that every service comes with a bonus survey—one you answer on a phone or fill out on a Web page? And that every one of those surveys is about rating the poor soul you spoke to or chatted with, rather than the company’s own crappy CEM system?

I always say yes to the question “Was your problem resolved?” because I know the human I spoke to will be punished if I say no.  Saying yes to that question complies with Don Marti‘s tweeted advice: “5 stars for everyone always—never betray a human to the machines.”

The main problem with CEM is that it’s all about getting service to scale across populations by faking interest in human contact. You can see it all through McKinsey’s The CEO Guide to Customer Experience. The customer is always on a “journey” through which a company has “touchpoints.”

Oh please.

IU Health, my primary provider of health services, does a good job on the whole, but one downside is the phone survey that follows up seemingly every interaction I have with a doctor or an assistant of some kind. The survey is always from a robot that says it “will only take a few minutes.” I haven’t counted, but I am sure some of those surveys last longer than the interaction I had with the human who provided the service: an annoyingly looooong touchpoint.

I wrote Why Surveys Suck here, way back in 2007. In it, I wrote,  “One way we can gauge the success of VRM is by watching the number of surveys decline.”

Makes me cringe a bit, but I think it’s still true.

The image above was created by Bing Creator and depicts “A hellscape of unhappy people, some on phones and others filling out surveys.”

Saturday, 30. March 2024

DIF Blog

DIF Newsletter #38

March 2024

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News
2. Working Group Updates
3. Open Groups
4. Announcements at DIF
5. Community Events
6. DIF Members
7. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

Spring conference season

The community is gearing up to put decentralized identity centre stage during the upcoming identity conference season.

Steering Committee and core team members, working / open group co-chairs and member orgs are representing DIF and participating in a range of activities to raise awareness of decentralized identity and DIF contributions, drive engagement in DIF and our key initiatives for 2024, demonstrate the viability of DI-based solutions and help people get started with DI.

Check out the Announcements section below to hear about DIF's confirmed and planned activities at IIW 38, the ID4Africa AGM, EIC and other events.

Veramo User Group

The Veramo User Group has been powering ahead since the group's first meeting on 15 February, with new SD-JWT functionality nearing release. All are welcome to participate in our thriving community of users and contributors - see Open Groups, below, for details.

China SIG

We're excited to formally welcome China SIG to the Decentralized Identity Foundation, marking a key step toward global adoption of decentralized identity to enable secure foundations for next-generation architectures. See Open Groups, below, for details of how to join the China SIG meeting on 17 April.

DIF Coffee Breaks

Our Senior Director of Community Engagement, Limari Navarrete, has kicked off a weekly Coffee Break on Twitter Spaces, with some fantastic guests lined up for April:

April 4th: MG, co-founder of GoPlausible
April 11th: Otto Mora from Polygon ID
April 18th: Evin McMullen, co-founder & CEO of Disco.xyz
April 25th: Key Pillars of Quark ID: Mexico City and Buenos Aires

Past Spaces so far:

Nick Dazé, CEO of Heirloom: https://twitter.com/i/spaces/1ynJOyjEqLXKR?s=20
Damian Glover, Senior Director of Communications @DIF: https://x.com/DecentralizedID/status/1770882034162172386?s=20
Kim Hamilton Duffy, Executive Director @DIF: https://x.com/DecentralizedID/status/1768336239168782566?s=20

Follow us on Twitter / X to set reminders for upcoming spaces. https://twitter.com/DecentralizedID

🛠️ Working Group Updates 💡Identifiers and Discovery Work Group

Andor Kesselman presented his work on Service Profiles for DID Documents: https://service-profiles.andor.us/

Daniel Buchner presented the did:dht method: https://did-dht.com/

The Linked Verifiable Presentations work item will soon progress to "Working Group Approved"; reviews are welcome: https://github.com/decentralized-identity/linked-vp

Identifiers and Discovery meets bi-weekly at 11am PT/ 2pmET/ 8pm CET Mondays

🔐 Applied Crypto WG

Open source code implemented for BBS pseudonyms and BBS pseudonyms with hidden PID (based on Blind BBS).

The DIF Crypto - BBS work item meets weekly at 11am PT/2pm ET /8pm CET Mondays

📦 Secure Data Storage

Decentralized Web Node (DWN) Task Force
Nearing 1.0 of the DWN spec and implementation, with a reference app to be debuted at IIW.

DIF/CCG Secure Data Storage WG - DWN Task Force meets bi-weekly at 9am PT/12pm ET/6pm CET Wednesdays

Claims & Credentials Working Group

Credential Trust Establishment (CTE) is gaining traction as we approach IIW, with a plan to advance it to formal V1 status. Check out the latest post on the DIF blog.

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click here.

📖 Open Groups at DIF Veramo User Group

The Veramo User Group has been meeting weekly since the middle of February. In addition to discussing Veramo use cases amongst our members and educating each other, we've been working to collaboratively improve Veramo and have new SD-JWT functionality nearing release as well as improvements to EIP-712 credentials underway. If you want to discuss use cases or have any bandwidth to help with improvements, please join us on Thursdays!

Meetings take place weekly on Thursdays, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details

📡 DIDComm User Group

The DIDComm User Group has been exploring and prioritising themes / opportunities including Authentication, Enhanced Chat / Group Chat, "Please call me" / Phone call related protocols, Implications of UX (App -> App?), WebRTC coordination, Using DIDComm to protect cloud-storage, DIDComm for IOT, Push Notifications and Webhook, DIDComm as a VC and B2C Protocols.

The DIDComm user group meets weekly at 12pm PT/3pm ET/ 9pm CET Mondays

📻 China SIG

The China SIG has officially launched after DIF’s Steering Committee voted to accept the China SIG Charter following a well-attended SIG kick-off meeting last month. The next meeting will take place on 17 April. All are welcome, here are the details:

Topic: DIF China SIG Monthly Meeting
Time: 2024/04/17 20:00-21:00 (GMT+08:00) Beijing Time
Meeting Link: https://meeting.tencent.com/dm/ScmnTNk3pTL6
Meeting Number: 437-967-238
Meeting Password: 2404

You can download the recording from the kick-off meeting in February here

🏦 Korea SIG

The SIG met online last month, following a face to face meeting in Seoul in January to plan activities for the year ahead.

Everyone can join us via the SIG website.

🌏 APAC/ASEAN Discussion Group

We invite everyone in the APAC region to join our monthly calls and contribute to the discussion. You will be able to find the minutes of the latest meeting here.

The DIF APAC call takes place Monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.

🌍 DIF Africa Discussion Group

We are in need of someone to chair this process. Let us know if you’re interested in building, organizing and hosting monthly meetings.

Occurs on the first Thursday of the month at 9am UTC

☂️ DIF Interoperability Group

If you are interested in speaking in the User Adoption and Interop series, or simply want to bounce around some interop thoughts and ideas, please reach out to group co-chairs Bonnie Yau, Brent Shambaugh or Elena Dumitrascu on Slack.

The Interoperability Group meets bi-weekly at 8am PT/11am ET/5pm CET Wednesdays

📢 Announcements at DIF

Public events calendar

DIF now has a public events calendar. 🎉

Here you will find not only DIF events but also conferences we'll be attending and participating in over the coming year. We look forward to connecting with you at these various public events! See below for how to subscribe.

Subscribe to the Calendar: Public URL; ICal Format.

Or find our calendar on the DIF Website.

Universal Resolver

DIF hosts and maintains the Universal Resolver, which is valuable public infrastructure for resolving DIDs across different methods. Experiment with it here.
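For anyone who wants to try it programmatically rather than through the web UI, here is a minimal Python sketch that queries a Universal Resolver instance over HTTP. The base URL reflects the commonly used public dev instance and the response handling is an assumption about the resolution result shape; adjust both for your own deployment.

```python
import requests

# Assumed public instance; self-hosted deployments expose the same path on their own host.
RESOLVER = "https://dev.uniresolver.io/1.0/identifiers/"

def resolve_did(did: str) -> dict:
    """Resolve a DID and return the resolution result as JSON."""
    resp = requests.get(RESOLVER + did, timeout=15)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = resolve_did("did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK")
    # The DID document is typically found under "didDocument" in the resolution result.
    print(result.get("didDocument"))
```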

🗓️ ️Community Events

ID Management Day

DIF's Executive Director, Kim Hamilton Duffy will lead a session on "Decentralized Identity for the People, and for the Non-People (NPEs that is): Updates, Trends, and Killer Use Cases" at ID Management Day on April 9. Check out the agenda for this virtual conference (hosted by IDSA) and register for free here.

Internet Identity Workshop (IIW) #38

DIF is busy gearing up for the Internet Identity Workshop's upcoming gathering at the Computer History Museum in Mountain View from 16 to 18 April. DIF's Executive Director Kim Hamilton Duffy and DIF's Senior Director of Community Engagement Limari Navarrete are looking forward to meeting with you at the event.

Planned DIF-themed sessions include The future of DIF; DIF Projects: From idea to demo (including the DIF Hackathon); and a discussion about DIF's proposed new Implementations and Applications Working Group.

Other sessions include Extending DID Documents for better service discovery with Service Profiles; Presentation Exchange v2.1 and where does it go from here?; and Credential Trust Establishment.

Look out for some exciting demos, including a decentralized social networking app built using DWNs and Open ID Connect and DIDComm working alongside each other.

This is shaping up to be an IIW not to be missed! Grab your ticket with DIF's 20% off discount here.

ID4Africa 2024 Annual General Meeting

DIF will be in Cape Town from 21 - 24 May for ID4Africa to share our insights about how VCs can be integrated with national identity programs and systems, and to connect with policy makers and implementers.

Steering Committee member Catherine Nabbala and Senior Director of Communications, Damian Glover will join Anand Acharya, Senior Project Manager of the Bhutan NDI scheme, to deliver a plenary presentation, From Policy To Reality: A Non-Technical Journey To Integrate Verifiable Credentials, at 09:30 local time on 23 May.

Cathie and Damian will be available to meet throughout the event - our base is Stand A07 in the conference hall - we look forward to meeting you there!

European Identity & Cloud Conference 2024

DIF is set to play a prominent role at the European Identity and Cloud Conference (EIC) in Berlin from 4 - 7 June, with DIF staff, Steering Committee members and other member orgs participating in a series of keynotes, presentations and panel discussions.

Our involvement kicks into gear with Road to EIC: Leveraging Reusable Identities in Your Organization, a webinar hosted by the event's organisers, analyst firm KuppingerCole, on 3 April.

DIF members are eligible for a 25% reduction on their ticket to attend EIC (on top of any other discounts). Simply enter code eic24dif25members during the last step of booking: click here to buy your ticket.

Also look out for more details of our partnership with KuppingerCole on the DIF blog next week!

IEEE Intelligent Systems ’24

DIF member Ivan Lambov is chairing a session on "Beyond the hype: exploring the real-world impact of Blockchain" at the IEEE 12th International Conference on Intelligent Systems, which takes place in Varna, Bulgaria from 29 - 31 August.

Ivan would like to extend an invitation to the conference to the entire DIF community. "It will be nice to get together and meet in person with members of the DIF community from the EAME region or elsewhere this summer. This conference presents a great opportunity to get global recognition for the work the DIF is doing and the projects it is involved in. And last, but not least, the conference takes place in a beach resort at the end of August:)," Ivan added.

Check out the agenda and register here.

🗓️ ️DIF Members

Guest blog - Mailchain

Mailchain, founded in 2021, aims to revolutionize decentralized identity and communication with its services, including Vidos and the Mailchain Communication Protocol, simplifying the integration and adoption of decentralized identity technologies. We spoke to co-founder Tim Boeckmann, who shared the company's journey to date.

Guest blog - David Birch

DIF caught up with David Birch, author of Identity Is The New Money, who shared his views on the development of the digital identity space, and some key challenges and opportunities for decentralized identity.

Gataca

Gataca is introducing the Higher Education Program, an initiative aimed at European universities to boost the adoption of ID wallets and verifiable credentials in education.

This program gives 20 universities free access to our decentralized identity platform for one year. They can issue and verify credentials with the universities and third parties in the program, eventually extending to all eIDAS 2.0 compliant organizations.

See the program's landing page for more details.

New Member Orientations

If you are new to DIF, join us for our upcoming new member orientations. Please subscribe to DIF’s Eventbrite for notifications on upcoming orientations and events, which can be found here.

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website.

Can't get enough of DIF?
| Follow us on Twitter
| Join us on GitHub
| subscribe on YouTube
| read our DIF blog
| read the archives

Friday, 29. March 2024

FIDO Alliance

Silicon Republic: The long road to passkeys: When will they become mainstream?

Andrew Shikiar, CEO at FIDO Alliance, discusses the various benefits of passkeys and the long-discussed goal of removing our password dependence.


Innovation & Tech Today: Minimize Risk and Fraud With New Technologies

Biometric authentication, advocated by FIDO, revolutionizes fraud prevention by replacing vulnerable passwords. With many users abandoning transactions due to forgotten passwords, biometrics offer secure verification through PINs, fingerprints, or facial scans, mitigating cyber risks.


Cloudflare TV: Why security keys are the safest way to secure the web

Cloudflare CTO John Graham-Cumming joins FIDO Alliance’s Andrew Shikiar in a fireside chat to discuss the significance of hardware keys in combating online attacks like phishing, offering insights for businesses seeking to protect their employees.


Content Authenticity Initiative

March 2024 | This Month in Generative AI: Text-to-Movie

An update on recent breakthroughs in a category of techniques that generate images, audio, and video from a simple text prompt.


by Hany Farid, UC Berkeley Professor, CAI Advisor

News and trends shaping our understanding of generative AI technology and its applications.

Generative AI embodies a class of techniques for creating audio, image, or video content that mimics the human content creation process. Starting in 2018 and continuing through today, techniques to generate highly realistic content have continued their impressive trajectory. In this post, I will discuss some recent breakthroughs in a category of techniques that generate images, audio, and video from a simple text prompt.

Faces

A common computational technique for synthesizing images involves the use of a generative adversarial network (GAN). StyleGAN is, for example, one of the earliest successful systems for generating realistic human faces. When tasked with generating a face, the generator starts by laying down a random array of pixels and feeding this first guess to the discriminator. If the discriminator, equipped with a large database of real faces, can distinguish the generated image from the real faces, the discriminator provides this feedback to the generator. The generator then updates its initial guess and feeds this update to the discriminator in a second round. This process continues with the generator and discriminator competing in an adversarial game until an equilibrium is reached when the generator produces an image that the discriminator cannot distinguish from real faces.

Below are representative examples of GAN-generated faces. In two earlier posts, I discussed how photorealistic these faces are and some techniques for distinguishing real from GAN-generated faces.

Eight GAN-generated faces. (Credit: Hany Farid)

Text-to-image

Although they produce highly realistic results, GANs do not afford much control over the appearance or surroundings of the synthesized face. By comparison, text-to-image (or diffusion-based) synthesis affords more rendering control. Models are trained on billions of images that are accompanied by descriptive captions, and each training image is progressively corrupted until only visual noise remains. The model then learns to denoise each image by reversing this corruption. This model can then be conditioned to generate an image that is semantically consistent with a text prompt like “Pope Francis in a white Balenciaga coat.”

From Adobe Firefly to OpenAI's DALL-E, Midjourney to Stable Diffusion, text-to-image generation is capable of generating highly photorealistic images with increasingly fewer obvious visual artifacts (like hands with too many or too few fingers).

The point was illustrated by a widely shared post from Kris Kashtanova (@icreatelife) on March 28, 2023:

“You probably saw on the news A.I. generations of Pope Francis wearing a white cozy jacket. I’d love to see your generations inspired by it. Here’s a prompt by the original creator Guerrero Art (Pablo Xavier): Catholic Pope Francis wearing Balenciaga puffy jacket in drill rap… pic.twitter.com/5WA2UTYG7b”

Text-to-audio

In 2019, researchers were able to clone the voice of Joe Rogan from eight hours of voice recordings. Today, from only one minute of audio, anyone can clone any voice. What is most striking about this advance is that unlike the Rogan example, in which a model was trained to generate only Rogan's voice, today's zero-shot, multi-speaker text-to-speech can clone a voice not seen during training. Also striking is the easy access to these voice-cloning technologies through low-cost commercial or free open-source services. Once a voice is cloned, text-to-audio systems can convert any text input into a highly compelling audio clip that is difficult to distinguish from an authentic audio clip. Such fake clips are being used for everything from scams and fraud to election interference.

Text-to-video

A year ago, text-to-video systems tasked with creating short video clips from a text prompt like "Pope Francis walking in Times Square wearing a white Balenciaga coat" or "Will Smith eating spaghetti" yielded videos of which nightmares are made. A typical video consists of 24 to 30 still images per second. Generating many realistic still images, however, is not enough to create a coherent video. These earlier systems struggled to create temporally coherent and physically plausible videos in which the inter-frame motion was convincing.

However, just this month researchers from Google and OpenAI released a sneak peek into their latest efforts. While not perfect, the resulting videos are stunning in their realism and temporal consistency. One of the major breakthroughs in this work is the ability to generalize existing text-conditional image models to train on entire video sequences in which the characteristics of a full space-time video sequence can be learned.

In the same way that text-to-image models extend the range of what is possible as compared to GANs, these text-to-video models extend the ability to create realistic videos beyond existing lip-sync and face-swap models that are designed specifically to manipulate a video of a person talking.

Text-to-audio-to-video

Researchers from the Alibaba Group released an impressive new tool for generating a video of a person talking or singing. Unlike earlier lip-sync models, this technique requires only a single image as input, and the image is then fully animated to be consistent with any audio track. The results are remarkable, including a video of Mona Lisa reading a Shakespearean sonnet.

When paired with text-to-audio, this technology can generate, from a single image, a video of a person saying (or singing) anything the creator wishes.

Looking ahead

I've come to learn not to make bold predictions about when and what will come next in the space of generative AI. I am, however, comfortable predicting that full-blown text-to-movie (combined audio and video) will soon be here, allowing for the generation of video clips from text such as: "A video of a couple walking down a busy New York City street with background traffic sounds as they sing Frank Sinatra's New York, New York." While there is much to be excited about on the content creation and creativity side, legitimate concerns persist and need to be addressed. 

While there are clear and compelling positive use cases of generative AI, we are already seeing troubling examples in the form of people creating non-consensual sexual imagery, scams and frauds, and disinformation.

Some generative AI systems have been accused of infringing on the rights of creators whose content has been ingested into large training data sets. As we move forward, we need to find an equitable way to compensate creators and to give them the ability to opt in to or out of being part of training future generative AI models.

Relatedly, last summer saw a historic strike in Hollywood by writers and performers. A particularly contentious issue centered around the use (or not) of AI and how workers would be protected. The writers’ settlement requires that AI-generated material cannot be used to undermine a writer’s credit, and its use must be disclosed to writers. Protections for performers include that studios give fair compensation to performers for the use of digital replicas, and for the labor unions and studios to meet twice a year to assess developments and implications of generative AI. This latter agreement is particularly important given the pace of progress in this space.

Subscribe to the CAI newsletter to receive ecosystem news.

Stay connected and consider joining the movement to restore trust and transparency online.

Author bio: Professor Hany Farid is a world-renowned expert in the field of misinformation, disinformation, and digital forensics. He joined the Content Authenticity Initiative (CAI) as an advisor in June 2023. The CAI is an Adobe-led community of media and tech companies, NGOs, academics, and others working to promote adoption of the open industry standard for content authenticity and provenance.

Professor Farid teaches at the University of California, Berkeley, with a joint appointment in electrical engineering and computer sciences at the School of Information. He’s also a member of the Berkeley Artificial Intelligence Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering Program, and Vision Science Program, and he’s a senior faculty advisor for the Center for Long-Term Cybersecurity. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception.

He received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989, his M.S. in computer science from SUNY Albany, and his Ph.D. in computer science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019.

Professor Farid is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and he’s a fellow of the National Academy of Inventors.


Origin Trail

Trusted AI for next generation RWAs with OriginTrail and Chainlink

We are witnessing an important convergence of technologies of Artificial Intelligence (AI), Internet, and Crypto promising to reshape our digital landscape. This convergence enables a Verifiable Internet for AI, unlocking AI solutions without hallucinations and ensuring full respect for data ownership and Intellectual Property rights.

Trillion-dollar industries, spanning the tokenization of real-world assets (RWAs), supply chains, the metaverse, construction, life sciences, and healthcare, among others, require AI systems to use verifiable information to deliver the multiplication effect to the benefit of users in a safe manner.

A modular and collaborative approach is necessary to achieve that. OriginTrail and Chainlink are working together to bring the vision of the Verifiable Internet for AI to reality, allowing the transformation of real world asset (RWA) tokenization.

OriginTrail Decentralized Knowledge Graph (DKG) is already powering trusted AI solutions across multiple RWA industries, such as protecting whisky authenticity with trusted AI and DNA tagging techniques, helping Swiss Federal Railways increase rail travel safety with cross-border rail operator connectivity, increasing the sustainability of the EU built environment with trusted AI, and supporting representatives of over 40% of US imports to safeguard data on security audits for overseas factories.

Expanding the OriginTrail decentralized AI framework with Chainlink oracle capability further extends the strength of RWA solutions by giving them access to real-time real world data. By synergizing the power of the Decentralized Knowledge Graph and Chainlink Data feeds, the capabilities of AI to retrieve verifiable information on RWAs can be applied across any domain.

Integrating Chainlink Data Feeds with OriginTrail DKG to create a Trusted AI solution

Each knowledge resource on the OriginTrail DKG is created as a Knowledge Asset, consisting of knowledge content, cryptographic proofs for immutability, and an NFT for ownership. For our example, we will create a Knowledge Asset for a Chainlink Data Feed. Once created, Knowledge Assets can be used in decentralized Retrieval-Augmented Generation (dRAG) AI applications. For our showcase, we will use an existing DOT/USD data feed in an AI application using the DKG and dRAG in 3 simple steps.

Step 1: Create a Knowledge Asset on the DKG

Since we wish to retrieve DOT/USD data feed in our AI application, we need to start by creating a Knowledge Asset linking to the data feed which we will use to retrieve the live price:

{
  "@context": "http://schema.org/",
  "@type": "ChainlinkDataFeed",
  "@id": "https://data.chain.link/feeds/moonbeam/mainnet/dot-usd",
  "description": "Chainlink DataFeed providing real-time DOT/USD price information on the Moonbeam network.",
  "baseAsset": {
    "@id": "urn:chainlink:base-asset:dot",
    "@type": "ChainlinkBaseAsset",
    "name": "DOT_CR",
    "description": "Polkadot cryptocurrency (DOT)"
  },
  "quoteAsset": {
    "@id": "urn:chainlink:quote-asset:usd",
    "@type": "ChainlinkQuoteAsset",
    "name": "USD_FX",
    "description": "United States Dollar (USD)"
  },
  "productType": "price",
  "productSubType": "reference",
  "productName": "DOT/USD-RefPrice-DF-Moonbeam-001",
  "contractAddress": "0x1466b4bD0C4B6B8e1164991909961e0EE6a66d8c",
  "network": "moonbeam",
  "rpcProvider": "https://rpc.api.moonbeam.network"
}

The main entities represented in the Knowledge Asset are:

Base asset (DOT_CR) — the first asset listed in a trading pair; this is the asset that is being priced
Quote asset (USD_FX) — the second asset listed in a trading pair; this is the currency the base asset is priced in

Necessary fields for DOT/USD value retrieval:

Contract address (contractAddress)
RPC Provider (rpcProvider)

This Knowledge Asset content can also be visualized in the DKG Explorer.
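
Publishing this JSON-LD content to the DKG is what turns it into a Knowledge Asset that other applications can discover and query. Below is a minimal sketch using the dkg.js client; the node endpoint, port, blockchain identifier, wallet variables, and option values are illustrative assumptions rather than recommended settings, so check the OriginTrail documentation for the exact client options:

const DKG = require('dkg.js');

// Connect to a DKG node; endpoint, port, and blockchain details below are placeholders
const dkg = new DKG({
  endpoint: 'https://your-dkg-node.example.com',
  port: 8900,
  blockchain: {
    name: 'otp:2043', // assumed NeuroWeb chain identifier; verify against the docs
    publicKey: process.env.WALLET_PUBLIC_KEY,
    privateKey: process.env.WALLET_PRIVATE_KEY,
  },
});

async function publishDataFeedKnowledgeAsset(content) {
  // Publish the JSON-LD content shown above as a new Knowledge Asset
  const asset = await dkg.asset.create({ public: content }, { epochsNum: 2 });
  console.log('Published Knowledge Asset UAL:', asset.UAL);
  return asset;
}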

Step 2: Use AI to query the DKG

From your application, you can use an LLM to generate DKG queries based on the user prompts. This step can have different degrees of complexity, so for this showcase, we will use the selected LLM to:

Determine whether the prompt is relevant for the data feed (in our case, whether the user’s question mentions the DOT token)
Use the LLM to structure a SPARQL query for the OriginTrail DKG to retrieve the Data Feed URL

An example of an engineered prompt to determine the relevance of the question for DOT token:

Given that the chatbot primarily responds to inquiries about the Polkadot ecosystem, including its native token, DOT, analyze the provided question to determine if there's a direct or indirect reference to DOT. Provide a response indicating 'true' if the question pertains to the value, function, or any aspect of DOT, within the context of discussions related to Polkadot ecosystem, either explicitly or implicitly, and 'false' if it does not. Question: {question}
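
A minimal sketch of how this relevance check might be wired into an application is shown below; the chat-completion endpoint, model name, and response parsing are illustrative assumptions and are not part of the OriginTrail tooling:

// Hypothetical helper: ask an OpenAI-compatible chat endpoint whether the question concerns DOT
async function isQuestionAboutDOT(question) {
  const prompt = `Given that the chatbot primarily responds to inquiries about the Polkadot ecosystem, ` +
    `including its native token, DOT, analyze the provided question to determine if there's a direct or ` +
    `indirect reference to DOT. Respond with 'true' or 'false'. Question: ${question}`;

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4',
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  const data = await response.json();
  // The prompt asks the model to answer strictly 'true' or 'false'
  return data.choices[0].message.content.trim().toLowerCase() === 'true';
}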

If the above prompt determines the question as relevant (returns true), we proceed with a SPARQL query for the OriginTrail Decentralized Knowledge Graph. There are various techniques to obtain a SPARQL query with the LLM you’re using. In our case, we seek ChainlinkDataFeed type entities (Knowledge Assets) with DOT as the BaseAsset. The query result in our case will be a single Knowledge Asset containing information about the DOT/USD value. The SPARQL query should look like this:

PREFIX schema: <http://schema.org/>
SELECT ?dataFeed ?contractAddress ?rpcProvider
WHERE {
?dataFeed a schema:ChainlinkDataFeed ;
schema:baseAsset ?baseAsset ;
schema:contractAddress ?contractAddress ;
schema:rpcProvider ?rpcProvider .
?baseAsset a schema:ChainlinkBaseAsset ;
schema:name "DOT_CR" .
}

Step 3: Retrieve the data and display it in your application

Retrieve all the necessary information from the Knowledge Assets obtained through the SPARQL query. Essential information includes the contract address and RPC endpoint, as they are required to execute the code fetching price information from Chainlink. In our case, we are fetching the DOT/USD price.

Code execution

The following code uses ethers.js to fetch the requested value from the retrieved Data Feed. Here’s a simple example:

const { ethers } = require('ethers');

// Your code that executes SPARQL queries.

// Connection details retrieved from the Knowledge Asset via the SPARQL query
const rpcProvider = sparqlResult.data[0].rpcProvider;
const contractAddress = sparqlResult.data[0].contractAddress;

// ethers v5 style provider for the Moonbeam RPC endpoint
const provider = new ethers.providers.JsonRpcProvider(rpcProvider);

// Minimal ABI covering the Chainlink Aggregator latestRoundData() function
const abi = [{
  inputs: [],
  name: "latestRoundData",
  outputs: [
    { internalType: "uint80", name: "roundId", type: "uint80" },
    { internalType: "int256", name: "answer", type: "int256" },
    { internalType: "uint256", name: "startedAt", type: "uint256" },
    { internalType: "uint256", name: "updatedAt", type: "uint256" },
    { internalType: "uint80", name: "answeredInRound", type: "uint80" },
  ],
  stateMutability: "view",
  type: "function",
}];

async function getDOTUSDPrice() {
  const contract = new ethers.Contract(contractAddress, abi, provider);
  const [ , price] = await contract.latestRoundData();

  // Chainlink USD feeds report prices with 8 decimals
  console.log(`DOT/USD Price: ${ethers.utils.formatUnits(price, 8)}`);
}
getDOTUSDPrice();

Include Chainlink Data feed into the final response

You can modify how the LLM will perform the decentralized Retrieval Augmented Generation and include the data feed as a part of the response by engineering the prompt based on your requirements. Here’s one example that appends it at the end of the generated response.
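
As a simple illustration of that idea (the helper functions below are hypothetical, and getDOTUSDPrice from Step 3 is assumed to be adapted to return the formatted price rather than log it), the live Chainlink reading can be appended to the dRAG-generated answer before it is returned to the user:

// Hypothetical glue code: answer the user, then append the live Chainlink reading
async function answerWithDataFeed(question) {
  if (!(await isQuestionAboutDOT(question))) {
    return generateAnswer(question); // hypothetical dRAG answer generation
  }

  const answer = await generateAnswer(question);
  const price = await getDOTUSDPrice(); // assumed to return the formatted DOT/USD price
  return `${answer}\n\nLive Chainlink Data Feed: 1 DOT = ${price} USD`;
}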

The next generation RWA solutions will be using the best of what the Internet, Crypto and AI have to offer. Combining the power of OriginTrail DKG and Chainlink unlocks avenues of value in the RWA industries that can disrupt the way those industries operate today. Your path to disrupting the trillion dollar industries can start with the 3 steps shown above. Join us in Discord to let us know how OriginTrail and Chainlink can boost your solution with trusted AI.

Trusted AI for next generation RWAs with OriginTrail and Chainlink was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 28. March 2024

Origin Trail

Announcing the ID Theory DeSci IPO (Initial Paranet Offering)

By OriginTrail and ID Theory

AI-ready knowledge foundation for the future of scientific research

The combination of AI (artificial intelligence) and DeSci (decentralised science) is poised to create a huge leap in humanity’s capacity to drive scientific research. While it will not happen overnight, technological maturity allows us to take critically important steps today that will create a better future for everyone tomorrow.

ID Theory has been at the forefront of this intersection for quite some time now, with its CIO on the board of Molecule and the fund serving as a founding member of BeakerDAO.

“Whilst we have enjoyed helping shape the future through exciting conversations, musings, and capital, it is now time for us, alongside the pioneers at OriginTrail, to help make this future a practical reality. Sometimes, you have to roll up your sleeves and personally shape the future you want to see.” — ID Theory

As a major step towards this future, ID Theory will be among the first to leverage Decentralized Knowledge Graph paranets to build out the AI-ready knowledge foundation for DeSci. The DeSci paranet will launch as a collaborative community mining relevant DeSci knowledge, and knowledge miners contributing knowledge to the paranet will earn NEURO rewards.

The Decentralized Knowledge Graph and the DeSci Paranet

The DeSci paranet will live on the OriginTrail Decentralized Knowledge Graph (DKG), a permissionless peer-to-peer network which will ensure that all the DeSci knowledge published to the DeSci paranet will be discoverable, verifiable, and attributed to the owners who will mine it. This way, our AI services will avoid the challenges of hallucination, operate with managed bias, and always respect the intellectual property of the knowledge owners.

As part of the DeSci paranet, ID Theory will run designated AI services, allowing users to interact with the mined knowledge. The first AI service will be a DeScAI chatbot allowing you to explore the knowledge in the DeSci paranet using the decentralized retrieval-augmented generation (dRAG) method, and, in the future, these services will evolve into more end-to-end research frameworks for autonomous agents to make scientific breakthroughs!

To explore more about the technical design of paranets, DKG and dRAG we recommend diving into the OriginTrail Whitepaper.

Calling all Knowledge Miners to Arms

In a short while, a full proposal for the DeSci Paranet will be put to the NeuroWeb community for approval. The proposal will include:

A showcase of the genesis knowledge assets that will be created
The incentives model for knowledge miners
The AI service demo for DeScAI

As a part of the creation of the DeSci paranet, OriginTrail and ID Theory are calling for future DeSci knowledge miners to get involved. As part of their Blueprint for Breakthroughs, ID Theory identified several Decentralised Autonomous Organisations (DAOs) that are focused on gathering and creating relevant knowledge such as Vita, Valley, Athena, Hair, Cerebrum, and Cryo. The paranet incentives are inclusive, and we invite all interested participants to get involved.

Towards Autonomous Research

The DeSci paranet is aimed at supporting the autonomous research vision and delivers the following critical elements for its success:

The data verifiability and ownership capabilities ensured by the NeuroWeb blockchain
The symbolic AI capabilities ensured by the OriginTrail DKG
The neural AI capabilities of Generative AI like Large Language Models (LLMs)
The incentives for relevant knowledge growth on NeuroWeb

Once we combine the trust and incentives of NeuroWeb, the deterministic foundation of the DKG, and the reasoning potential of LLMs, we can create not only specific AI solutions but also wider research tasks for AI agents, which can take these building blocks and conduct autonomous research on verifiable sources.

About ID Theory

ID Theory is a liquid- and venture-focused crypto fund investing in the next trillion users across three main verticals:

Decentralised AI: autonomous agents will rule the world.
Decentralised Finance: trust code not bankers.
Decentralised Science: every disease is curable.

Decentralisation is the guiding principle for all investments — providing a trustless foundation for humans and AI agents to thrive.

See you at the bleeding edge.

About OriginTrail

OriginTrail is an ecosystem-building decentralized knowledge infrastructure for artificial intelligence (AI). With the mission of tackling misinformation, which is exacerbated by AI adoption, OriginTrail enables verifiable tracking of the origins of information, discoverability, and the integrity of knowledge to enable trusted AI. It has various applications in the domains of real-world assets (RWAs), search and recommendation engines, question-answering systems, and generally knowledge-dependent applications (such as AI systems).

OriginTrail is creating a Verifiable Web for decentralized AI by empowering world-class brands and builders. It utilizes its unique Decentralized Knowledge Graph and OriginTrail Parachain to deliver AI-powered search and solutions for enterprises and individuals worldwide.

Web | X | Facebook | Telegram | LinkedIn | GitHub | Discord

Announcing the ID Theory DeSci IPO (Initial Paranet Offering) was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Initial Paranet Offerings (IPOs) to supercharge the Verifiable Internet for AI

Access to shared open knowledge constructed in a collaborative way is mission-critical for the future of AI, especially since non-AI-generated content is expected to be surpassed in size by synthetic, AI-generated content in the coming period. The importance of it has also been highlighted by the Turing Award winner in the field of Deep Learning, Yann LeCun:

“The way you train that (AI) system will have to be crowdsourced … if you want it to be a repository of all human knowledge, all humans need to contribute to it.” Yann LeCun

To achieve that, AI para-networks or paranets, the autonomously operated collections of Knowledge Assets owned by its communities and residing on the OriginTrail Decentralized Knowledge Graph (DKG), were introduced in the Whitepaper 3.0.

Initial Paranet Offerings (IPOs) are now introduced as a means of publicly launching a paranet, with a collection of Knowledge Assets and an accompanying incentivization structure proposed and voted upon via the NeuroWeb governance mechanism. Each IPO is structured as an initial proposal and an initial set of Knowledge Assets published, along with an incentivization structure set forth by an IPO operator that proposes how the incentives will be split across three groups:

IPO operator
Knowledge miners
NEURO holders that participated in supporting the creation of an IPO and approved the requested allocation of NEURO utility tokens for an IPO’s knowledge mining

The success of an IPO largely depends on the IPO operator’s ability to wisely propose the incentive structure, taking into consideration the following factors, among others:

The IPO operator autonomously selects the AI services to be used to drive the value of a knowledge base, and must undertake an economically and commercially viable approach for both the creation and maintenance of a paranet. It is expected that an IPO operator proposes an operator fee that renders the birth of a paranet economically viable (earning a share of allocated emissions), while also setting up a fee structure for both knowledge miners and NEURO holders that partake in voting.

Assuming the cost of mining Knowledge Assets on the DKG is paid in TRAC utility tokens, knowledge miners are central not only to the success of an IPO proposal, but even more so as the entities that drive incentives in NEURO tokens: only when new Knowledge Assets are mined are the allocated NEURO emissions executed across the three groups as incentives. When launching an IPO, the paranet operator will define the ratio of NEURO to be earned per TRAC spent to mine each Knowledge Asset. An IPO operator may set the ratio autonomously to target a desired profitability before the proposal is submitted to voting, yet attempts at price gouging might not receive support from NEURO holders.

NEURO holders that support an IPO via governance voting are to lock up tokens for the duration of the NEURO emission allocated for the IPO. Though the share of emissions allocated for an IPO is an important factor for NEURO holders’ decision, the duration of the “lock period” can also play an important role. The paranet operator also defines what portion of paranet incentives will be shared with NEURO holders supporting the proposal.

The ecosystem incentivizing the Verifiable Internet for AI

Interest in launching the first IPOs has already been pre-registered by several institutional entities and builders, with the inaugural batch nearing the announcement stage. If you are interested in launching a paranet and knowledge mining, hop into the community discussion in Discord and share your ideas.

The Initial Paranet Offerings (IPOs) to supercharge the Verifiable Internet for AI was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 27. March 2024

DIF Blog

Effective governance now with DIF Credential Trust Establishment

In the digital identity space, the Trust Establishment (TE) and Credential Trust Establishment (CTE) specifications play crucial roles in defining how trust is established and managed. CTE, in particular, is gaining traction as we approach the Internet Identity Workshop (IIW), with a plan to advance it to formal V1 status. This article focuses on the CTE, shedding light on its key features that make it a game-changer in building trust within digital credentials.

Core Aspects of CTE

CTE builds upon TE by enabling ecosystems to express their trust in the issuers of decentralized identifiers (DIDs) and credentials. Credential validation steps of checking the integrity and revocation status are well known and understood, but there are not yet commonly-agreed-upon standards for evaluating the authority of a party to issue a credential’s claims. 

Existing approaches have fallen short in one or more of the following areas: 

Ensuring the approach is sufficiently adaptable
The ability to express authorization for a specific role (not just general authorization)
Allowing good performance with minimal resources, even making it eligible for offline use
Keeping it low-cost to implement, deploy, and use

This is where CTE comes in: enabling ecosystems to express the credibility of participants, but in a way that meets the above needs. By doing so, it helps avoid “rent-seeking” behavior, in which an ecosystem participant tries to position themselves to collect transaction fees or similar.

Authority in the Ecosystem

CTE is non-prescriptive in its stance on defining who is an authority. It operates on the principle that authority is determined by an ecosystem’s existing trust structure, informing the acceptance and recognition of the credentials. This flexibility allows for wide adoption and adaptation, making it a practical solution for managing trust.

Governance and Flexibility

CTE introduces a practical governance model that is lightweight and adaptable. It serves ecosystems both large and small. It specifies roles such as credential issuance and verification, and allows grouping by schemas, or type of credential. This allows CTE to adapt well to a wide variety of use cases and simplifies the process of determining who is authorized to issue or verify credentials.
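
As a purely conceptual sketch of that governance model (the structure and field names below are illustrative assumptions and do not follow the normative CTE data model), such a governance file could state which DIDs an ecosystem trusts to issue or verify a given credential schema:

// Conceptual sketch only; not the normative Credential Trust Establishment schema
const governanceFile = {
  name: 'Example Training Ecosystem Governance',
  version: '1.0',
  schemas: [
    {
      id: 'https://example.org/schemas/first-aid-certification',
      roles: {
        // DIDs the ecosystem recognizes for each role on this credential type
        issue: ['did:example:training-provider-1', 'did:example:training-provider-2'],
        verify: ['did:example:employer-1'],
      },
    },
  ],
};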

Trust on Demand

CTE includes flexible dials in cases where more fluidity is required. For example, instead of being statically included in the registry, an individual can hold one or more credentials that assign them a specific role, where the root authority of that credential corresponds to an entry/role in the registry. This method is not only efficient for offline use but also broadens compatibility with different protocols, enhancing the flexibility and utility of the trust establishment process.

Impact

CTE is designed to counter rent-seeking behaviors and establish a solid trust foundation in digital credentials. It enables organizations and individuals to easily verify the legitimacy of credentials, providing a clear pathway for recognizing valuable credentials for professional development, for example. The specification’s governance model is straightforward and requires minimal technical investment, making it accessible and implementable across various industries.

How it can be used

In the wild, CTE files would be used by software representing companies and people. Companies and people will have a collection of governance files they use for different industries and purposes. In general, companies will be interested in software providing an immediate yes or no answer informing whether to accept or reject a credential. For individuals, however, software can use CTE files to advise on whether a credential is recognized by different parties. By indexing different CTE files, software can help individuals decide which ecosystems and credentials are most valuable for them.

Future Directions

As CTE heads towards v1, its potential to streamline the verification process and enhance the credibility of digital credentials is becoming increasingly apparent. DIF invites you to learn more about how CTE can revolutionize the digital identity field in providing a scalable, flexible, and trustworthy framework for managing digital credentials.

Learn more at:

Internet Identity Workshop
DIF virtual event (details coming soon)

In summary, CTE is not just about establishing trust; it's about making the process more accessible, adaptable, and reliable for everyone involved in the digital identity ecosystem. Its forward-thinking approach to governance, authority, and risk mitigation positions it as a cornerstone specification in the evolving landscape of digital credentials.


GS1

Maintenance release 2.9


GS1 GDM SMG voted to implement the 2.9 standard into production in February 2024.

Key Milestones:

See GS1 GDM Release Schedule

As content for this release is developed, it will be posted to this webpage, followed by an announcement to the community to ensure visibility.
GDSN Data Pools should contact the GS1 GDSN Data Pool Helpdesk to understand the plan for the update. Trading Partners should work with their Data Pools (if using GDSN) and/or Member Organisations on understanding the release and any impacts to business processes.

GDM 2.9 contains updated reference material aligned with ADB 2.3 and GDSN 3.1.26.

 

Updated For Maintenance Release 2.9

GDM Standard 2.9 (February 2024)

Local Layers For Maintenance Release 2.9

China - GSMP RATIFIED (April 2022)

France - GSMP RATIFIED (November 2023)

Germany - GSMP RATIFIED (November 2023)

Poland - GSMP RATIFIED (November 2023)

Romania - GSMP RATIFIED (December 2021)

USA - GSMP RATIFIED (February 2023)

Finland - GSMP RATIFIED (November 2023)

 

Release Guidance

GDM Market Stages Guideline (June 2023)

GDM Attribute Implementation Guideline (February 2024)

GPC Bricks To GDM (Sub-) Category Mapping (March 2024)

Attribute Definitions for Business (February 2024)

GDM (Sub-) Categories (October 2021)

GDM Regions and Countries (17 December 2021)

GDSN Release 3.1.26 (February 2024)

Tools

GDM Navigator on the Web 

GS1 GDM Attribute Analysis Tool (Nov 2023)

GDM Local Layer Submission Template (May 2023)

Training

E-Learning Course

Any questions?

We can help you get started using GS1 standards.

Contact your local office


EdgeSecure

Edge Partners with FABRIC, Princeton University, and Rutgers, The State University of New Jersey, on High Performance Network Infrastructure

NEWARK, NJ, March 27, 2024 – Edge recently partnered with FABRIC, Rutgers, The State University of New Jersey, and Princeton University to provide high performance network infrastructure connecting university researchers and their local compute clusters and scientific instruments to the larger FABRIC infrastructure.

Notes Dr. Forough Ghahramani, Assistant Vice President for Research, Innovation, and Sponsored Programs, Edge, “The partnership with the FABRIC team and researchers at Princeton University and Rutgers will create opportunities to explore innovative solutions not previously possible for a large variety of high-end science applications and provide a platform on which to educate and train the next generation of researchers on future advanced distributed system designs.”

FABRIC is an international infrastructure that enables cutting-edge experimentation and research at-scale in the areas of networking, cybersecurity, distributed computing, storage, virtual reality, 5G, machine learning, and science applications. Funded by the National Science Foundation’s (NSF’s) Mid-Scale Research Infrastructure program, FABRIC enables computer science and networking researchers to develop and test innovative architectures that could yield a faster, more secure Internet. 

“EdgeNet is uniquely well-positioned to provide infrastructure support to these types of research networking initiatives,” explains Bruce Tyrrell, Associate Vice President, Programs & Services, Edge. Continues Tyrrell, “As a backbone and external services provider to both Rutgers and Princeton University, Edge has the capacity and capability to meet the high bandwidth research needs of our partner institutions. Our extensive optical backbone enables Edge to efficiently and economically deploy 100Gb transport services to all of our members.”    

The FABRIC team is led by researchers from University of North Carolina at Chapel Hill, University of Kentucky, Clemson University, University of Illinois, and the Department of Energy’s ESnet (Energy Sciences Network). The team also includes researchers from many other universities, including Rutgers and Princeton University, to help test the design of the facility and integrate their computing facilities, testbeds, and instruments into FABRIC.


“FABRIC aims to be an infrastructure to explore impactful new ideas that are impossible or impractical with the current Internet. It provides an experimental sandbox that is connected to the globally distributed testbeds, scientific instruments, computing centers, data, and campuses that researchers rely on every day,” said Paul Ruth, FABRIC Lead PI. “Edge enables us to support research across many facilities including the COSMOS wireless testbed, Princeton’s experimental P4 testbed, and remotely controlled instruments such as a CryoEM microscope at Rutgers.”

“The integration of FABRIC with COSMOS, both being pivotal national testbeds, opens unparalleled avenues for experimentation that blend wired and wireless networking with edge computing. Supported by Edge’s provision of connectivity between these pivotal national testbeds as well as to other national and international networks in NYC and Philadelphia carrier hotels, it opens unparalleled avenues for experimentation that blend wired and wireless networking with edge computing. This synergy not only enhances our research capabilities but also paves the way for groundbreaking advancements in network infrastructure and distributed systems,” notes Ivan Seskar, Chief Technologist at WINLAB, Rutgers, emphasizing the importance of collaborative efforts in pushing the boundaries of networking and computing research.


Princeton University Provost and Gordon Y.S. Wu Professor in Engineering and Computer Science, Dr. Jennifer Rexford, was an early supporter of bringing FABRIC to Princeton, serving as a founding member of the project’s steering committee. Shares Rexford, “Linking into FABRIC allows Princeton to support science on a global scale, across multiple domains and enables researchers to reinvent the internet by experimenting with novel networking ideas in a realistic setting — at tremendous speed, scope and scale.” Further elaborates Jack Brassil, Ph.D., Senior Director of Advanced CyberInfrastructure, Office of the Vice President for Information Technology, and Senior Research Scholar, Department of Computer Science, Princeton University, “FABRIC enables the Princeton University campus to usher in a new generation of terabit per second networking applications. By connecting our faculty to experimental testbeds, scientific instruments, and research collaborators at other higher education institutions, FABRIC will provide a fast path to scientific discovery.”

To learn more about FABRIC capabilities, visit https://whatisfabric.net/. Contact Forough Ghahramani (research@njeged.net) for additional information. 

About Edge

Edge serves as a member-owned, nonprofit provider of high performance optical fiber networking and internetworking, Internet2, and a vast array of best-in-class technology solutions for cybersecurity, educational technologies, cloud computing, and professional managed services. Edge provides these solutions to colleges and universities, K-12 school districts, government entities, hospital networks and nonprofit business entities as part of a membership-based consortium. Edge’s membership spans the northeast, along with a growing list of EdgeMarket participants nationwide. Edge’s common good mission ensures success by empowering members for digital transformation with affordable, reliable and thought-leading purpose-built, advanced connectivity, technologies and services.

The post Edge Partners with FABRIC, Princeton University, and Rutgers, The State University of New Jersey, on High Performance Network Infrastructure appeared first on NJEdge Inc.


We Are Open co-op

Towards a manifesto for Open Recognition

Advocating for a more diverse future for the recognition of talents, skills, and aspirations

Image CC BY-ND Visual Thinkery for WAO

Back in 2016, the Open Recognition Alliance created the Bologna Open Recognition Declaration (BORD). This has helped the community organise around principles relating to the concept of Open Recognition for all. It emphasises the importance of building out technologies and infrastructure to enable Open Recognition, as well as advocating for policies which foster its development.

Eight years later, the Open Recognition is for Everybody (ORE) community has started work on a manifesto for Open Recognition. This will be part of the Open Recognition Toolkit and extends the BORD to help people envision and advocate for a future where Open Recognition is commonplace.

Unpacking Open Recognition

Let’s begin with defining terms:

Open Recognition is the awareness and appreciation of talents, skills and aspirations in ways that go beyond credentialing. This includes recognising the rights of individuals, communities, and territories to apply their own labels and definitions. Their frameworks may be emergent and/or implicit.
(What is Open Recognition, anyway?)

We want to help people understand that traditional approaches to credentialing, while important for unlocking opportunities, are just one part of a wider recognition landscape.

Image CC BY-ND Visual Thinkery for WAO

For example, you could think of traditional credentialing — with its courses, modules, and diplomas — as a greenhouse where growth conditions are carefully controlled. Only certain plants thrive in this environment, and they are pre-selected to do so.

Open Recognition, on the other hand, is more like the garden that surrounds the greenhouse where a diverse array of plants grow naturally, adapt to their environment, and flourish in unique ways. Not only that, but there are many different gardens with different types of soil and varying atmospheric conditions.

Getting started with a manifesto

A manifesto is a call to action. It’s a way of allowing people to sign up to implement specific principles in order to work towards a better future.

To get started on that road, in a recent ORE community call we asked two questions:

What sucks that we want to do the opposite of?
What doesn’t exist that we want to bring into being?

While these are only our first steps towards a manifesto with a subset of the community, we’re keen to share what we’ve discussed so far.

What sucks?

Simplifying complex systems — our digital landscape is cluttered with overly complex technologies and terminology. We aim to streamline these technologies, making open recognition accessible to everyone, not just the tech-savvy.

Clearing confusion and enhancing communication — there’s a tendency to overlook past contributions in the field, creating a cycle where new initiatives ignore the groundwork laid by predecessors. We want to provide clear, accurate information about Open Recognition to varied audiences.

Dismantling exclusivity — some forms of recognition and credentials are guarded as if they’re an exclusive membership available only to a select few. It’s important that we break down these barriers to create a more inclusive environment where everyone’s achievements are acknowledged.

What doesn’t exist?

Streamlined badge creation — we want to make creating badges for Open Recognition as easy as filling out a social media profile. This would encourage wider adoption and creativity in badge design/issuing.

Stories of success — examples and case studies help guide and inspire others. This could be part of the Open Recognition Toolkit, allowing stories to be shared and help provide practical and conceptual guidance to others.

Bridging spheres of learning — different forms of learning, for example formal and informal, tend to be siloed. As we know valuable skills can be acquired outside of traditional educational settings, we want to build a bridge to recognise the worth of both formal training and self-taught expertise.

Next steps

Creating a manifesto for Open Recognition means producing something that resonates with a broad audience. It needs to be informative and upbeat, and it should take an ideological stance which advocates for a better future world.

Our next community call will continue the work we started this week, helping us work towards a plausible utopia for Open Recognition. If this is something which resonates with you, and you’d like to get involved, join us!

Related posts
How badges can change the world — Part 1: The Two Loops Model for Open Recognition advocacy
How badges can change the world — Part 2: Why we need to transition
Advocating for learner-centric badge systems: Some thoughts on campaigning for the right things

Towards a manifesto for Open Recognition was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.


Blockchain Commons

Foremembrance Day 2024 Presentation

For Foremembrance Day 2024, Christopher Allen gave a Twitter Livestream discussing the tragedy of overidentification in The Netherlands in WWII, how France offered a different path, and how we must continue to be wary about what identity information we collect and distribute today.

For more see the slides of this presentation, the original article “Echoes from History” and a discussion of modern threats in “The Dangers of eIDAS”.

Tuesday, 26. March 2024

FIDO Alliance

Recap: Virtual Summit: Demystifying Passkey Implementations

By: FIDO staff

Passkeys hold the promise of enabling simpler, strong authentication. But first organizations, governments and individuals will have to adopt the technology – and some of them have questions.

At the Authenticate Virtual Summit: Demystifying Passkey Implementation on March 13, speakers from the FIDO Alliance, Intercede, IDEMIA, Yubico, Dashlane and 1Password as well as implementers including Amazon and Target, presented on their experiences implementing and working with passkeys. The virtual summit covered the technical perspective on passkeys from the FIDO Alliance, as well as use cases for passkeys in the enterprise, consumer authentication, and the U.S. government. Along the way, attendees asked lots of questions and got lots of insightful answers.

Fundamentally a key theme that resonated throughout the virtual summit was that passkeys are a password replacement – and it’s a replacement that can’t come soon enough.

“Passwords are still the primary way for logging on and they are still easily phished through social engineering and they tend to be very difficult to use and to maintain,” David Turner, senior director of standards development at the FIDO Alliance said. “The consequences are real and the impact is real to the world at large.”

Passkeys 101

During his session, Turner provided a high-level overview on what passkeys are and how they work.

Passkeys build upon existing FIDO authentication protocols and simplify the user experience. 

Passkeys can now be synchronized across devices through the use of passkey providers, removing the need for separate credentials on each device. Passkeys also enable new capabilities like cross-device authentication. Turner demonstrated how a QR code scanned on one device can securely connect to credentials stored on another nearby device. 

In addition to synced passkeys, there are also device-bound passkeys, which rely on technologies like a security key to provide the required credentials.
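
For developers, the passkey creation Turner described ultimately comes down to the standard WebAuthn API in the browser. The following is a minimal, illustrative registration sketch; the relying party details, challenge handling, and user object are placeholders, and in practice the challenge and user information come from your server:

// Illustrative browser-side passkey registration (values are placeholders)
async function registerPasskey(challengeFromServer, user) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: challengeFromServer, // Uint8Array issued by the server
      rp: { id: 'example.com', name: 'Example' },
      user: {
        id: new TextEncoder().encode(user.id),
        name: user.email,
        displayName: user.displayName,
      },
      pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: 'required',      // discoverable credential, i.e. a passkey
        userVerification: 'preferred',
      },
    },
  });
  // Send credential.response back to the server for verification and storage
  return credential;
}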

The State of Passkeys

The current and future state of passkey adoption was the topic tackled by Andrew Shikiar, executive director and CEO of the FIDO Alliance.

As of 2024, hundreds of services support passkeys, including the major platform vendors Microsoft, Apple, and Google, which together represent billions of users.

“If you are a service provider and you wish to deploy passkeys, you can do so with high confidence that your consumers will be able to leverage them,” he said.

The FIDO Alliance aims to drive passkey support over the coming years, in part by sharing best practices and success stories, which is a core part of what the virtual summit was all about.

Usability was emphasized as a key factor for widespread adoption. 

“Usability is paramount. It must be front and center in what you do,” said Shikiar. 

The FIDO Alliance has released user experience guidelines and a design system to help companies implement passkeys in a user-friendly way. Future guidelines will address additional use cases.

Shikiar emphasized that passkeys are not about being a new addition to improve the security of passwords. His expectation is that passkeys will be seen as a true password replacement rather than just an attempt at bolstering existing authentication methods. He emphasized that the fundamental problem is passwords, and the goal should be replacing them, not just adding extra security layers on top of passwords. Shikiar wants people to stop thinking about multi-factor authentication factors and instead think about enabling phishing resistant identities. 

Passkeys are on Target at Target

Passkeys are already in use at retail giant Target, helping to improve security and optimize authentication for its employees. 

Tom Sheffield, senior director cybersecurity at Target, said that the company has been leveraging FIDO for workforce authentication since 2018 and adopted it as a primary authenticator in 2021.

One of the ways that Target has been able to more easily enable passkey support across its platforms is via Single Sign On (SSO). 

“We have a very robust SSO environment across our web application suite,” Sheffield said. “So for us, that made it very easy to integrate FIDO into the SSO platform, and then therefore every application behind SSO automatically got the benefit of it.”

In terms of how Target was able to get its users to adopt passkeys quickly, Sheffield said that the option was communicated to users in the login flow, rather than trying to explain to users what they should do in an email.

Overall, Sheffield emphasized that if an organization is using OTP (one time passwords) today for multi-factor authentication (MFA), any form of FIDO will provide significantly better user experience and security. 

“There have not been many security programs that I’ve been part of in my 25-year career in this space that offer you security and user experience simultaneously,” he said. “So if you’re using anything other than FIDO you’ve got a great opportunity to up your game and provide a great experience for users which should make you a hero.”

Authenticating a Billion Customers with Passkeys at Amazon

Among the biggest consumer-facing websites that supports passkeys today is online giant Amazon.

Yash Patodia, senior manager of product management at Amazon, detailed how passkeys were rolled out to hundreds of millions of consumers worldwide. Patodia explained Amazon’s motivation, noting that passwords are relatively easy for a bad actor to crack, while passkeys help customers authenticate more easily than other methods and with a better user experience.

Amazon implemented passkeys using different APIs for web, iOS, and Android platforms. Now that passkeys are available across devices, Amazon’s goal is to drive awareness and increase passkey adoption among its customer base over the next year. In his view, passkeys are well suited for mass adoption, and early indications from Amazon’s user base are very encouraging.

“If you’re a consumer facing company who has a big customer base, definitely explore this option,” he said.

Considerations for FIDO and Passkeys in the US Government 

The U.S. Government is no stranger to the world of strong authentication, with many staffers already using PIV (Personal Identity Verification) smart card credentials. 

Teresa Wu from IDEMIA and Joe Scalone from Yubico, who both serve on the FIDO Alliance’s Government Deployment Working Group (GDWG), provided an overview of how passkeys can complement PIV credentials and support a zero trust security model. 

As government agencies work to implement phishing-resistant multi-factor authentication, passkeys are an option that could provide a more seamless user experience than one-time passwords or hardware tokens. 

“We are not here to replace PIV, we are here to supplement and use FIDO where PIV is not covered,” said Wu. 

One area they see opportunities for FIDO is for federal contractors and employees who are not eligible for a PIV card due to their job functions. Currently these individuals rely on passwords for system access.

State of Passkey Portability Set to Improve

A critical aspect of user experience is the ability to change passkey providers and move from one provider to another, if that’s what the user wants to do.

With existing password managers and legacy passwords, the process of moving credentials isn’t particularly efficient or secure, according to Rew Islam from Dashlane and Nick Steele from 1Password. It’s a situation that the Credential Provider Special Interest Group within the FIDO Alliance is looking to solve with a new standard for securely porting passwords between different password/passkey management applications.

The group is developing a new Credential Exchange Protocol that will use hybrid public key encryption to securely transfer credentials; the effort also includes the development of a standardized data format for credential information.

“By having the standard credential format, it will allow for interoperability of sharing credentials between two different providers in different organizations,” Steele said.
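
The Credential Exchange Protocol itself is still being drafted, but the general shape of hybrid public key encryption can be sketched. The following Python snippet is purely illustrative and is not the FIDO specification; the function name, info string, and payload fields are hypothetical. An ephemeral X25519 key agreement with the importing provider's public key derives a symmetric key via HKDF, which then encrypts the credential bundle with AES-GCM:

import json, os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Illustrative only: ephemeral ECDH + HKDF + AES-GCM, the generic
# "hybrid public key encryption" pattern. Not the actual Credential Exchange Protocol.
def export_credentials(recipient_public_key: X25519PublicKey, credentials: dict) -> dict:
    ephemeral = X25519PrivateKey.generate()
    shared_secret = ephemeral.exchange(recipient_public_key)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"credential-transfer-demo").derive(shared_secret)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(credentials).encode(), None)
    return {
        "ephemeral_public_key": ephemeral.public_key().public_bytes(
            Encoding.Raw, PublicFormat.Raw).hex(),
        "nonce": nonce.hex(),
        "ciphertext": ciphertext.hex(),
    }

The importing provider would run the mirror-image key agreement with its own private key to recover the symmetric key and decrypt the bundle; the standardized data format the group is developing would define what goes inside the credential payload.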

A proof of concept demo for the credential exchange is currently set for May, during the FIDO Member Plenary in Osaka, Japan. Islam noted that the effort represents a real triumph for the power of FIDO to bring different competing vendors together for a common purpose.

Common Questions about Passkeys 

The virtual summit concluded with an ‘Ask Me Anything’ (AMA) session where attendees asked their most pressing questions about passkeys.

Among the big questions asked:

How should organizations consider choosing synced passkeys or device-bound passkeys from a security and usability perspective?

Turner answered that the first thing to make really clear is that synced passkeys are probably the right answer for the majority of use cases. That said, he noted that FIDO recognizes that some people have a much higher risk profile, and in those cases device-bound passkeys can provide an extra level of trust.

Can passkeys play a role in transaction signing?

Pedro Martinez from Thales responded that yes, passkeys can be used to sign transactions. He explained that the beauty of the FIDO protocol is that it is based on the signature of a challenge. As such, it’s possible to adjust the challenge so that it contains data related to the transaction that needs to be digitally signed.
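
To make the idea concrete, here is a minimal Python sketch; it is not a full WebAuthn/CTAP implementation, and the transaction fields and helper name are hypothetical. The relying party derives the challenge from the transaction details, so the resulting FIDO-style signature covers exactly what the user approved:

import json, hashlib, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Derive the challenge from the transaction so the signature binds to it;
# the random nonce keeps each challenge unique and prevents replay.
def make_transaction_challenge(transaction: dict) -> bytes:
    payload = json.dumps(transaction, sort_keys=True).encode()
    return hashlib.sha256(payload + os.urandom(16)).digest()

transaction = {"amount": "120.00", "currency": "EUR", "payee": "ACME Corp"}
challenge = make_transaction_challenge(transaction)

# The authenticator (simulated here with an Ed25519 key pair) signs the challenge...
authenticator_key = Ed25519PrivateKey.generate()
signature = authenticator_key.sign(challenge)

# ...and the relying party verifies it with the registered public key.
authenticator_key.public_key().verify(signature, challenge)  # raises InvalidSignature if tampered
print("transaction approval verified")

In real deployments the signature also covers authenticator data and a client data hash, but the principle is the same: whatever is folded into the challenge is cryptographically bound to the user’s approval.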

When will passkeys be the default mode of authentication? 

Shikiar said that he doesn’t think that all passwords will go away, but he is hopeful for a passwordless future.

“Sophisticated risk engines and anomaly detectors don’t really think twice about accepting a password,” he said. “But as passkeys become more prevalent and become the default, all of a sudden using a password will be anomalous in and of itself, and I think that’s when we’ll be in the fabulous future when using a password is rightfully seen as a high-risk and anomalous action.”

Monday, 25. March 2024

Identity At The Center - Podcast

It’s time for a public conversation about privacy on the latest episode of the Identity at the Center Podcast

It’s time for a public conversation about privacy on the latest episode of the Identity at the Center Podcast. We had an open conversation with Hannah Sutor, a Principal Product Manager at GitLab and IDPro Board Member, about privacy. We delved into the nuances of privacy as a human right, the expectations of privacy in our roles as employees and consumers, and much more.

Check out this episode at idacpodcast.com or in your favorite podcast app.

#iam #podcast #idac

Friday, 22. March 2024

World Identity Network

World Identity Network Releases “Shadows in the Dark” Documentary on Amazon

WASHINGTON, March 22, 2024 /PRNewswire/ — World Identity Network (WIN), the leading nonprofit organization advocating for universal identity rights, has released its groundbreaking documentary, Shadows in the Dark: Our Global Identity Crisis, exclusively on Amazon.

“Releasing this film to the public is a moment of great triumph for our organization,” says WIN Founder and CEO, Dr. Mariana Dahan. “We spent years interviewing undocumented persons and refugees. Telling their stories with the utmost care, precision, and nuance was a tremendous responsibility, and we could not be happier with the final result.”

Shadows in the Dark is a sprawling saga following the stories of undocumented individuals across the United States, the Middle East, and refugee camps in Europe and beyond. The documentary shines a light on those born in the shadows of the formal economy, at the margins of society, lacking common identity documents, such as birth certificates and passports.

The movie highlights the work that Dr. Mariana Dahan has conducted at The World Bank, as the initiator and first global coordinator of the Identification for Development (ID4D) agenda, which celebrates its 10-year anniversary this year. Shadows in the Dark offers a compelling analysis of the successes and the risks associated with this multi-billion-dollar program.

The Emmy Award-winning film crew interviewed decision-makers, technologists, and human rights activists advocating for universal identification and the responsible use of digital technologies, such as biometrics, facial recognition, and AI.

“Identity is at the heart of many of today’s global challenges,” says Shadows in the Dark Co-director, Brad Kremer. “It is the common thread in immigration and many of the conflict zones existing throughout the world. When Dr. Mariana Dahan approached me to do this film together, I knew it would be a journey of immense meaning. But directing this narration, and telling the stories of everyone this issue impacts, has exceeded all our expectations.”

Produced in partnership with the United Nations, the Human Rights Foundation and Singularity University, Shadows in the Dark features extensive interviews with displaced Ukrainian and Syrian refugees recounting their experiences with the asylum process, along with leading officials at the World Bank and the United Nations, and the founders building new digital identity solutions. The film likewise explores nuances surrounding surveillance, authoritarian regimes, and biometric systems, as well as a dialogue with a group of far-right border advocates in the United States.

“In many ways, this film is a culmination of my life’s work,” continues Dr. Dahan. “Having been born without a birth certificate in Soviet-era Moldova, at the border with Ukraine, I know firsthand how crucial identity is to the preservation of human rights. I encourage everyone to watch the film and learn more about this global issue impacting millions. Identity is the cornerstone of human civilization.”

To learn more about Shadows in the Dark go to www.shadowsinthedark.movie

The post World Identity Network Releases “Shadows in the Dark” Documentary on Amazon appeared first on World Identity Network.


FIDO Alliance

Identity Week: HID’s 2024 report highlights mobile IDs, MFA, and sustainability in security trends

With over 83% of organisations currently using MFA, the shift away from password-dependency is clear. However, the report indicates a slower but growing implementation of Zero Trust architectures, currently in place in up to 16% of larger organisations. The development of standards like FIDO heralds a move toward more secure authentication options.


Neowin: Proton Pass gets passkey support for both free and paid users

Proton has announced passkey support in its Proton Pass password manager, which now offers enhanced security and usability for both free and paid users across all platforms.


Biometric Update: FIDO’s influence expands with new security key and board member

Cisco has further solidified its commitment to passkeys by joining the FIDO Alliance’s board of member representatives. Andrew Shikiar, executive director and CEO of the FIDO Alliance welcomes Cisco’s expanded involvement, noting their historical contributions through Duo Security and now as an official member.


Elastos Foundation

Bitcoin Layer 2 Evolution: Unveiling BeL2’s BTC Oracle with Elastos

The launch of BeL2’s BTC Oracle marks a critical juncture, a paradigm shift in how Bitcoin interacts with the broader ecosystem of decentralised applications (DApps) and Ethereum Virtual Machine (EVM) compatible blockchains.

Bitcoin, as the first cryptocurrency, has long been critiqued for its limitations in scalability and flexibility, particularly in the context of smart contracts and DApps. The introduction of BeL2 and its BTC Oracle addresses these critiques head-on by generating zero-knowledge proofs (ZKPs) to enable secure, private, and efficient communication between Bitcoin and EVM blockchains. This development is crucial because it expands Bitcoin’s utility beyond being a mere store of value to a foundational layer upon which complex decentralised applications can be built and managed directly.


The Core

The core of this innovation lies in BeL2’s BTC Oracle. The BTC Oracle generates ZKPs to feed real-time Bitcoin transaction data into EVM smart contracts without compromising the privacy or security of the transactions. This functionality is revolutionary, as it allows for the creation of Bitcoin-denominated smart contracts across any EVM-compatible blockchain, vastly expanding the potential use cases and applications for Bitcoin in the decentralised finance (DeFi) space.

BeL2, or Bitcoin Layer 2, further extends this capability by providing a framework for developing and managing Bitcoin-native smart contracts. It represents the culmination of efforts to integrate Bitcoin more deeply into the ecosystem of decentralised applications, enabling novel financial products and services such as BTC lending, algorithmic stablecoin issuance, and more.


The Mechanism

BeL2’s technology stack comprises a BTC Oracle that inputs Bitcoin-related data into EVM contracts, an upcoming ELA-powered relay network to decentralise and secure the data transmission, and the application layer where the actual development of Bitcoin-native smart contracts takes place.

This approach minimises reliance on intermediaries, reduces points of failure, and enhances the system’s overall resilience and efficiency. BeL2’s BTC Oracle is centred around enhancing Bitcoin’s utility and accessibility, involving innovative cryptographic techniques like ZKPs to deliver a comprehensive solution for Bitcoin and EVM blockchain interoperability.


The Impact

By enabling direct development on Bitcoin Layer 2, Elastos is not just augmenting Bitcoin’s functionality; it is redefining the possibilities of the blockchain space. The ability for any EVM blockchain to leverage Bitcoin in smart contracts opens up new avenues for innovation, potentially increasing the market for Bitcoin-based applications sevenfold.

This development aligns with the broader trend of seeking solutions that respect the foundational principles of blockchain technology—decentralisation, security, and user sovereignty—while pushing the boundaries of what’s possible. It embodies a non-consensus, forward-thinking approach that challenges conventional limitations and opens up new opportunities for the entire crypto ecosystem.

In conclusion, the launch of Elastos’ BTC Oracle and BeL2 platform represents a significant milestone in the evolution of Bitcoin and blockchain technology. By addressing fundamental challenges of interoperability and functionality, Bitcoin’s value is not just in its scarcity and security but in its utility and integration into the decentralised web.

Try the BeL2 demo here!


DIDAS

Parallel Signatures – a relevant input to the Technology Discussion

To enhance the Swiss e-ID framework with selective disclosure while ensuring unlinkability, it’s imperative to incorporate advanced digital signature technologies such as BBS+ signatures. These technologies not only fortify the security of digital credentials but also significantly enhance user privacy. Such capabilities are crucial in minimizing the risk of personal data exposure and ensuring that users retain control over their information. It’s essential to continuously align our Trust Infrastructure with international cryptographic standards while remaining adaptable to emerging norms. This approach will facilitate interoperability across borders and sectors, ensuring that e-ID systems are both secure and universally recognized.

The parallel signatures model involves attaching multiple digital signatures to a single document or payload, with each signature providing different security or privacy features. This approach allows for a flexible and robust security framework, accommodating various cryptographic standards and privacy needs without compromising the integrity of the original document. It is particularly useful in environments that must adhere to diverse regulatory standards, or in scenarios where resilience, high security, and privacy are all paramount.

Cryptographic layering supports adaptiveness by incorporating multiple layers of cryptographic techniques within a system. This approach allows cryptographic methods to be integrated and removed as needed by the Trust Ecosystem governance, enabling the system to adapt to evolving security threats and advancements in cryptographic research. It ensures long-term resilience and flexibility, allowing systems to maintain security without complete overhauls.

Applying cryptographic schemes always mandates careful handling of private keys. Preventing their exposure is vital, even more so when using advanced schemes supporting derivative keys, as is possible with BBS+. This underscores the need for strict security measures to prevent unauthorized access and ensure the system’s integrity.
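
As a minimal illustration of the parallel signatures idea (not of BBS+ itself, which requires a dedicated library), the following Python sketch attaches two independent signatures, produced with different schemes, to the same payload; the payload fields are hypothetical. A verifier can check whichever signature its policy requires, and signatures can be added or removed without touching the payload:

import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical credential payload; in the e-ID context this would be the
# document or credential body that every parallel signature covers.
payload = json.dumps({"credential": "example-e-id", "holder": "did:example:123"},
                     sort_keys=True).encode()

ed_key = Ed25519PrivateKey.generate()             # first scheme: Ed25519
ec_key = ec.generate_private_key(ec.SECP256R1())  # second scheme: ECDSA P-256, standing in for BBS+

envelope = {
    "payload": payload,
    "signatures": [
        {"alg": "Ed25519", "sig": ed_key.sign(payload)},
        {"alg": "ES256", "sig": ec_key.sign(payload, ec.ECDSA(hashes.SHA256()))},
    ],
}

# Each signature verifies independently against the unchanged payload.
ed_key.public_key().verify(envelope["signatures"][0]["sig"], payload)
ec_key.public_key().verify(envelope["signatures"][1]["sig"], payload, ec.ECDSA(hashes.SHA256()))
print("both parallel signatures verified")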

Public-Private Partnerships (PPPs) represent a proven strategic model for operationalizing digital trust and identity solutions, combining public oversight with private sector efficiency and innovation. Such partnerships should be structured to encourage shared investment and risk, with a clear focus on the public interest, global standards and local governance, protection of digital sovereignty, and value-based adoption. These initiatives should be complemented by ongoing research into cryptographic innovations, preparing the ground for future advancements in e-ID security and privacy.

To address these challenges comprehensively and to build a continuously improving framework that is not only secure and compliant but also resilient and forward-looking, we should evaluate investing in an independent body that accompanies further progress in technology and governance and supports public and private sector adoption, so that the opportunities of a trusted digital economy can be realized in the long term.

Thank you DIDAS Technology Working Group and Manu Sporny of Digital Bazaar for the dialogue!


MyData

A Recorded Delivery Network… for Data

In the MyData Matters blog series, MyData members introduce innovative solutions and practical use cases that leverage personal data in line with MyData values. Since the 1980s, personal data has been managed in essentially the same way. Organisations aggregate customer information in vast data warehouses, with the assumption that more data is always better to […]

Thursday, 21. March 2024

Digital ID for Canadians

DIACC Women in Identity: Marli Lichtman

DIACC women in identity spotlights showcase outstanding DIACC member women in identity. If you are a DIACC member woman in identity and would like us to feature you in the spotlight, contact us!

Marli Lichtman is Managing Director and Head, Digital Strategy and Controls at BMO Financial Group, BMO.

Follow Marli on LinkedIn

What has your career journey looked like?

Let me work backwards, starting with my current role as Head of Digital Strategy and Controls at BMO. In this role, I lead two teams accountable for: (1) Strategy: defining and executing BMO’s “Digital First” agenda and (2) Controls: working in partnership with the Financial Crimes Unit to build and enhance digital controls to protect our customers against fraud.

I initially joined BMO’s Corporate Strategy Team in 2013 and since then have worked in progressively senior roles across Finance, Risk, Transformation and Business Operations.

Before joining BMO, I was a consultant in Oliver Wyman’s Finance and Risk Practice and prior to that, I worked in wealth management and earned my CFA (Chartered Financial Analyst) designation. My first job out of school was at a boutique investment advisory firm. I graduated from Ivey at Western University with an Honours Business Administration (HBA) degree.

When you were 20 years old, did you visualize a dream job and if so, why?

I didn’t really know what I wanted to do when I was 20! I focused the early days of my career on finding opportunities where I could be challenged, learn as much as possible, maintain optionality to transition to other industries or career paths, and work with great people who would champion my career.

Have you encountered significant barriers in your career as a woman in leadership, and if so, what were they?

I have experienced many of the usual challenges you hear about concerning women in the workplace. However, my biggest barrier has been getting into my own head and thinking that I don’t deserve the positions I’ve been given (I mean, earned 😊). Through executive coaching, mentors, sponsors, and simply the experience of failing and rebounding, I’ve been able to overcome this (although I would be lying if I said I don’t experience imposter syndrome from time to time!).

How do you balance work and life responsibilities?

It’s a constant juggling act, but I try to focus on 5 things:

1. Regular calendar reviews to “optimize” my time (e.g., which calls can I take from the car on my way to / from the office?)
2. Learning to say “no” and setting clear boundaries (applies to both work and personal life).
3. Finding time for self-care.
4. Working as a team with my partner who is also balancing a demanding schedule.
5. Living my values and knowing what’s important in life.

How can more women be encouraged to pursue digital trust and identity careers?

We need to start with education – What is Digital ID? What skillsets do you need to enter the space? Why is diversity so important? Who are female trailblazers in the space, and what has their career path looked like? Early exposure, encouragement, and mentorship are key to increasing female representation in this space.

What are some strategies you have learned to help women achieve a more prominent role in their organizations?

Build meaningful relationships. Earn the trust of your colleagues. Network within and outside of your industry. Ensure you have a mentor and a sponsor at your organization. Most importantly, stay true to yourself.

What will be the biggest challenge for the generation of women behind you?

While women have made considerable progress over the past decade, there is still more work to do. The next generation will continue to face the same challenges (e.g., gender bias, pay inequality, balancing personal life) but will benefit from increased female representation and sponsorship at Senior levels.

What advice would you give to young women entering the field?

Be confident – you are in the field for a reason! Trust your instincts, and don’t be too hard on yourself.


Ceramic Network

Toward the first decentralized points system: Oamo becomes the first points provider on Ceramic

We're thrilled to announce that Oamo is partnering with Ceramic as a data provider on the platform. Oamo will issue tens of millions of publicly available credentials based on wallets’ on-chain activity and holdings. This is the first step in a broader initiative to develop and standardize the first decentralized point system, powered by Ceramic and Oamo’s credential models.

Oamo has been a big supporter of the Ceramic ecosystem from day one. By harnessing Ceramic's innovative DID (Decentralized Identifier) infrastructure and ComposeDB for zero-party data storage, they’re setting the foundation for a future where user data is private by design, perishable at will, and accessible only with explicit permission. Oamo and Ceramic are crafting a path toward a consensual and rewarding digital ecosystem.

The partnership so far

Since launching on Ceramic in Q3 2023, Oamo has witnessed remarkable results – over 65,000 Oamo Profiles have been created, with more than 200,000 Ceramic documents generated. Additionally, Oamo has distributed over 400,000 credentials spanning Web2 and on-chain behaviors, enriching the digital identity and access privileges of Oamo Profile users across various platforms.

Oamo credentials cover:

On-chain activity across DeFi, NFTs, staking and gaming;
Wallet holdings including major ERC-20s and NFT collections; and
Social activity across Web2 platforms like Discord, Youtube and X.

Supercharging the Ceramic ecosystem

With this partnership and announcement, Oamo aims to enhance digital identity and engagement through:

Credential Distribution

Oamo has indexed millions of EVM wallets’ behaviors and holdings, and will be distributing tens of millions of publicly available credentials to enrich user identities across platforms, ensuring the security and verification of online activities. Credentials issued will be maintained and updated monthly to include time decay and ensure they always represent the latest behaviors of the indexed wallets. Feedback from the community is welcome to develop new credentials that track the most relevant on-chain behaviors for builders in the ecosystem. These credentials can then be used to:

Compile specific wallet lists for airdrops.
Establish reputation frameworks based on behavioral data points.
Launch strategic user acquisition campaigns by identifying wallets in a specific target audience and contacting them via XMTP, for example.

Decentralized Point System

Oamo will leverage its credential models to develop the first standardized decentralized point system on Ceramic, with each indexed wallet receiving its own scorecard based on its on-chain activity and holdings. Builders in the ecosystem will be able to leverage these scorecards and customize their own points system with their own credentials and Oamo’s.

Credential & Points Management SDK

Oamo will release an SDK to allow any builder to search and leverage Oamo’s credentials and points system easily. This middleware will also allow builders to issue their own credentials and points based on their own models and app activity.

What’s in it for users

Anyone creating their Decentralized Identifier (DID) on the Ceramic Network (by creating an Oamo Profile, for instance) will be able to claim their credentials and scorecards seamlessly. This open and inclusive approach democratizes access to digital credentials, ensuring users from all backgrounds and levels of onchain experience can benefit from Ceramic’s ecosystem of builders.

What’s in it for developers

Oamo's vision includes diverse use cases, transforming how developers interact with consumers. The Oamo platform offers endless opportunities for various types of protocols and apps:

DeFi Protocols
Easily find wallets matching their target audience, such as active liquidity providers on leading AMMs or active traders on DEXes across major EVM chains.

NFT Projects
Identify potential collectors based on their NFT holdings and distribute collections to the right user base.

Wallet Providers
Identify and reach whales holding specific token amounts across multiple chains.

Liquid Staking Projects
Identify wallets holding significant ETH amounts and generating yield via lending protocols as high-value acquisition targets.

Game Developers
Find gamers in Web3 that hold specific NFTs or have engaged with similar on-chain games.

While the Oamo app provides a hub for user acquisition and relationship development, this publicly available tooling and data will allow anyone to craft their own strategies.

Builders on the Ceramic Network will have the capability to query, consume, and customize issued credentials and points to power new data-rich use cases, such as targeted airdrops, credential-gated experiences, loyalty programs, and more. To streamline integrations, Oamo will be launching an SDK, making it easier for developers to incorporate these capabilities into their own projects.

Join the Ceramic Discord and Oamo’s Telegram channel for builders to contribute or be notified about updates and releases.

About Ceramic

Ceramic is a decentralized data network for managing verifiable data at scale, combining the trust and composability of a blockchain with the flexibility of an event-driven architecture to help organizations get more value from their data. Thousands of developers use it to manage reputation data, store attestations, log user activity, and build novel data infrastructure. Ceramic frees entrepreneurs from the constraints of traditional siloed infrastructure, letting them tap into a vibrant data ecosystem to bring their unique vision to life faster.

About Oamo

Oamo allows consumers to discover and match with their favorite brands based on their online activity. Brands can define their ideal user persona based on their online behaviors, optionally incentivize data sharing via token rewards, and design personalized conversion and retention campaigns to acquire power users. Zero-party data guarantees an optimal match between interested consumers and brands through rich behavioral alignment, leading to higher conversion rates and LTV.


Origin Trail

Decentralized RAG with OriginTrail DKG and NVIDIA Build ecosystem

Introduction

Generative Artificial Intelligence (AI) is already reaching relevant adoption across multiple fields, however, some of its limitations are significantly hurting the potential of mainstream adoption and delivering improvements in all fields of modern humanity. For GenAI to be production-ready for such a scale of impact we need to limit hallucinations, manage bias, and reject intellectual property (or data ownership) infringements. The promise of Verifiable Internet for AI is to address these shortfalls by providing information provenance in model outputs, ensuring verifiability of presented information, respecting data ownership, and incentivizing new knowledge creation.

Below we showcase an implementation framework called Decentralized Retrieval-Augmented Generation (dRAG) on the NVIDIA Build ecosystem, which offers a broad selection of powerful models across industries and modalities. dRAG advances the Retrieval-Augmented Generation (RAG) framework proposed by Patrick Lewis, which aims to increase the accuracy and reliability of GenAI models with facts fetched from external sources. The RAG framework has gained prominence among AI developers and leaders of major tech companies, such as NVIDIA CEO Jensen Huang.

The dRAG advances the RAG system by leveraging the Decentralized Knowledge Graph (DKG), a permissionless network of Knowledge Assets. Each Knowledge Asset contains Graph data and/or Vector embeddings, immutability proofs, a Decentralized Identifier (DID), and the ownership NFT. When connected in one permission-less DKG, the following capabilities are enabled:

Knowledge Graphs — structural knowledge in knowledge graphs allows a hybrid of neural and symbolic AI methodologies, enhancing the GenAI models with deterministic inputs.

Ownership — dRAG uses input from Knowledge Assets that have an owner that can manage access to the data contained in the Knowledge Asset.

Verifiability — every piece of knowledge on the DKG has cryptographic proofs published ensuring that no tampering has occurred since it was published.

In this tutorial, you will learn how to query the OriginTrail DKG and retrieve verified Knowledge Assets on the DKG.

Prerequisites

An NVIDIA Build platform account and API key.
A DKG node. Please visit the official docs to learn how to set one up.
A Python project with a virtual environment set up.

Step 1 — Installing packages and setting up dkg.py

In this step, you’ll install the necessary packages using pip and set up the credentials for dkg.py.

Navigate to your Python project’s environment and run the following command to install the packages:

pip install openai dkg python-dotenv annoy

The OpenAI client is going to act as an intermediary for interacting with the NVIDIA API. You’ll store the environment variables in a file called .env. Create and open it for editing in your favorite editor:

nano .env

Add the following lines:

OT_NODE_HOSTNAME="your_ot_node_hostname"
PRIVATE_KEY="your_private_key"
NVIDIA_API_TOKEN="your_nvidia_api_token"

Replace the values with your own, which you can find in the configuration file of your OT Node, as well as your wallet’s private key in order to perform the Knowledge Asset create operation, which needs to be funded with TRAC tokens (more information available in the OriginTrail documentation). Keep in mind that this information should be kept private, especially your wallet’s key. When you’re done, save and close the file.

Then, create a Python file where you’ll store the code for connecting to the DKG:

nano dkg_version.py

Add the following code to the file:

from dkg import DKG
from dkg.providers import BlockchainProvider, NodeHTTPProvider
from dotenv import load_dotenv
import os
import json

dotenv_path = './.nvidia.env' # Replace with your .env file address
load_dotenv(dotenv_path)
ot_node_hostname = os.getenv('OT_NODE_HOSTNAME')
private_key = os.getenv('PRIVATE_KEY')

node_provider = NodeHTTPProvider(ot_node_hostname)
blockchain_provider = BlockchainProvider("testnet", "otp:20430", private_key=private_key)

dkg = DKG(node_provider, blockchain_provider)
print(dkg.node.info)

Here, you first import the required classes and packages. Then, you load the values from .env and instantiate a NodeHTTPProvider and BlockchainProvider with those values, which you pass in to the DKG constructor, creating the dkg object for communicating with the graph.

If all credentials and values are correct, the output will show you the version that your OT Node is running on:

{'version': '6.2.3'}

That’s all you have to do to be connected to the DKG!

Step 2 — Instructing the LLM to create Knowledge assets on the DKG

In this step, you’ll connect to the NVIDIA API using the OpenAI Python library. Then, you’ll instruct the LLM to generate a Product Knowledge Asset in JSON-LD format and publish it to the DKG.

First, you need to initialize the OpenAI class, passing in the NVIDIA API as the base_url along with your API key. The OpenAI client acts as an intermediary to the NVIDIA API here and can call multiple LLMs, such as Google’s Gemma and Meta’s Llama, both of which are used in this tutorial.

from openai import OpenAI

client = OpenAI(
base_url = "https://integrate.api.nvidia.com/v1",
api_key = os.getenv('NVIDIA_API_TOKEN')
)

Then, you define the instructions, telling the model what to do:

instruction_message = '''
Your task is the following:

Construct a JSON object following the Product JSON-LD schema based on the provided information by the user.
The user will provide the name, description, tags, category and deployer of the product, as well as the URL which you will use as the '@id'.

Here's an example of a Product that corresponds to the mentioned JSON-LD schema:
{
"@context": "http://schema.org",
"@type": "Product",
"@id": "https://build.nvidia.com/nvidia/ai-weather-forecasting",
"name": "ai-weather-forecasting",
"description": "AI-based weather prediction pipeline with global models and downscaling models.",
"tags": [
"ai weather prediction",
"climate science"
],
"category": "Industrial",
"deployer": "nvidia"
}

Follow the provided JSON-LD schema, using the provided properties and DO NOT add or remove any one of them.
Output the JSON as a string, between ```json and ```.
'''

chat_history = [{"role":"system","content":instruction_message}]

As part of the instructions, you provide the model with an example Product definition, according to which a new one should be generated. We want to create a Knowledge Asset which will represent the ‘rerank-qa-mistral-4b’ model from the NVIDIA Build platform. You add the contents of that message to chat_history with a system role, meaning that it instructs the model before the user comes in with actionable prompts.

Then, you define an example user_instruction for testing the model:

user_instruction = '''I want to create a product (model) with name 'rerank-qa-mistral-4b', which is a GPU-accelerated model optimized for providing a probability score
that a given passage contains the information to answer a question. It's in category Retrieval and deployed by nvidia.
It's used for ranking and retrieval augmented generation. You can reach it at https://build.nvidia.com/nvidia/rerank-qa-mistral-4b. Give me the schema JSON LD object.'''

This user prompt wants the LLM to output a Product with the given name and gives information as to where that model can be found.

Finally, you can ask the LLM to compute the output and print it:

completion = client.chat.completions.create(
model="google/gemma-7b",
messages=chat_history + [{"role":"user","content":user_instruction}],
temperature=0,
top_p=1,
max_tokens=1024,
)

generated_json = completion.choices[0].message.content
print(generated_json)

The output will look like this:

```json
{
"@context": "http://schema.org",
"@type": "Product",
"@id": "https://build.nvidia.com/nvidia/rerank-qa-mistral-4b",
"name": "rerank-qa-mistral-4b",
"description": "GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.",
"tags": [
"rerank-qa-mistral-4b",
"information retrieval",
"retrieval augmentation"
],
"category": "Retrieval",
"deployer": "nvidia"
}
```

The LLM has returned a JSON-LD structure that can be added to the DKG.

def clean_json_string(input_string):
    # Strip the Markdown code fences that the LLM wraps around the JSON
    if input_string.startswith("```json") and input_string.endswith("```"):
        return input_string[7:-3].strip()
    elif input_string.startswith("```") and input_string.endswith("```"):
        return input_string[3:-3].strip()
    else:
        return input_string

product = json.loads(clean_json_string(generated_json))

content = {"public": product}
create_asset_result = dkg.asset.create(content, 2)
print('Asset created!')
print(json.dumps(create_asset_result, indent=4))
print(create_asset_result["UAL"])

Here you first define a function (clean_json_string) that will clean up the JSON string and remove the Markdown code markup. Then, you load the product by deserializing the JSON and add it to the DKG by calling dkg.asset.create().

The output will look like this:

Asset created!
{
"publicAssertionId": "0x09d8d7c5b82bd09bc3f51770f575e15f1157c6292652d977afbe453932e270ef",
"operation": {
"mintKnowledgeAsset": {
"transactionHash": "0x6fb8a6039f97cf3c0d8cb8b1a221be405d4a7cbdeab7f27240ae9848322cad98",
"transactionIndex": 0,
"blockHash": "0x7f73351931b89c4c75d3bf66206df69fc55cb3a0f6e126545d82cfe3e81c0d6a",
"from": "0xD988B6fd921CFab980a7f2F60B9aC9F7918D7F71",
"to": "0xB25D47412721f681f1EaffD1b67ff0638C06f2B7",
"blockNumber": 3674556,
"cumulativeGasUsed": 397582,
"gasUsed": 397582,
"contractAddress": null,
"logs": [
{
"address": "0x1A061136Ed9f5eD69395f18961a0a535EF4B3E5f",
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x000000000000000000000000d988b6fd921cfab980a7f2f60b9ac9f7918d7f71",
"0x000000000000000000000000000000000000000000000000000000000027fb68"
],
"data": "0x",
"blockHash": "0x7f73351931b89c4c75d3bf66206df69fc55cb3a0f6e126545d82cfe3e81c0d6a",
"blockNumber": 3674556,
"transactionHash": "0x6fb8a6039f97cf3c0d8cb8b1a221be405d4a7cbdeab7f27240ae9848322cad98",
"transactionIndex": 0,
"logIndex": 0,
"transactionLogIndex": "0x0",
"removed": false
},
{
"address": "0xf305D2d97C7201Cea2A54A2B074baC2EdfCE7E45",
"topics": [
"0x6228bc6c1a8f028a2e3476a455a34f5fa23b4387611f3c147a965e375ebd17ba",
"0x09d8d7c5b82bd09bc3f51770f575e15f1157c6292652d977afbe453932e270ef"
],
"data": "0x00000000000000000000000000000000000000000000000000000000000003e700000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000008",
"blockHash": "0x7f73351931b89c4c75d3bf66206df69fc55cb3a0f6e126545d82cfe3e81c0d6a",
"blockNumber": 3674556,
"transactionHash": "0x6fb8a6039f97cf3c0d8cb8b1a221be405d4a7cbdeab7f27240ae9848322cad98",
"transactionIndex": 0,
"logIndex": 1,
"transactionLogIndex": "0x1",
"removed": false
},
{
"address": "0xFfFFFFff00000000000000000000000000000001",
"topics": [
"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
"0x000000000000000000000000d988b6fd921cfab980a7f2f60b9ac9f7918d7f71",
"0x000000000000000000000000f43b6a63f3f6479c8f972d95858a1684d5f129f5"
],
"data": "0x0000000000000000000000000000000000000000000000000000000000000006",
"blockHash": "0x7f73351931b89c4c75d3bf66206df69fc55cb3a0f6e126545d82cfe3e81c0d6a",
"blockNumber": 3674556,
"transactionHash": "0x6fb8a6039f97cf3c0d8cb8b1a221be405d4a7cbdeab7f27240ae9848322cad98",
"transactionIndex": 0,
"logIndex": 2,
"transactionLogIndex": "0x2",
"removed": false
},
{
"address": "0x082AC991000F6e8aF99679f5A2F46cB2Be4E101B",
"topics": [
"0x4b81188c3c973dd634ec0dae5b7e72f92bb03834c830739d63935923950d6f64",
"0x0000000000000000000000001a061136ed9f5ed69395f18961a0a535ef4b3e5f",
"0x000000000000000000000000000000000000000000000000000000000027fb68"
],
"data": "0x00000000000000000000000000000000000000000000000000000000000000c000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000065fc48a00000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000076a700000000000000000000000000000000000000000000000000000000000000000600000000000000000000000000000000000000000000000000000000000000341a061136ed9f5ed69395f18961a0a535ef4b3e5f09d8d7c5b82bd09bc3f51770f575e15f1157c6292652d977afbe453932e270ef000000000000000000000000",
"blockHash": "0x7f73351931b89c4c75d3bf66206df69fc55cb3a0f6e126545d82cfe3e81c0d6a",
"blockNumber": 3674556,
"transactionHash": "0x6fb8a6039f97cf3c0d8cb8b1a221be405d4a7cbdeab7f27240ae9848322cad98",
"transactionIndex": 0,
"logIndex": 3,
"transactionLogIndex": "0x3",
"removed": false
},
{
"address": "0xB25D47412721f681f1EaffD1b67ff0638C06f2B7",
"topics": [
"0x60e45db7c8cb9f55f92f3de18053b0b426eb919a763a1daca0ea9ad20961e878",
"0x0000000000000000000000001a061136ed9f5ed69395f18961a0a535ef4b3e5f",
"0x000000000000000000000000000000000000000000000000000000000027fb68",
"0x09d8d7c5b82bd09bc3f51770f575e15f1157c6292652d977afbe453932e270ef"
],
"data": "0x",
"blockHash": "0x7f73351931b89c4c75d3bf66206df69fc55cb3a0f6e126545d82cfe3e81c0d6a",
"blockNumber": 3674556,
"transactionHash": "0x6fb8a6039f97cf3c0d8cb8b1a221be405d4a7cbdeab7f27240ae9848322cad98",
"transactionIndex": 0,
"logIndex": 4,
"transactionLogIndex": "0x4",
"removed": false
}
],
"logsBloom": "0x00000100400000000000800000000000000000000000000000000000000000000000010020000000000000000000000000000000000010800000000000001000000040000000400040000008002400000080000000004000000000000000000000040000020000000000000000000a00000000008000020000000010000210015000000000000000000080000000001000000000000000000000000200000000040000001020002002000000000000000000000000000000000000000000000000000002000000000000000000008004000000000000010000000000000020000000000000002800000000000000000000000000000000100000000000010000",
"status": 1,
"effectiveGasPrice": 40,
"type": 0
},
"publish": {
"operationId": "1bb622c7-8fa1-4414-b39e-0aaf3f5465f9",
"status": "COMPLETED"
}
},
"UAL": "did:dkg:otp:20430/0x1a061136ed9f5ed69395f18961a0a535ef4b3e5f/2620264"
}

Here we can see a lot of useful information, such as the Knowledge Asset issuer, transaction IDs from the blockchain, and the status of the operation, which was completed. The UAL returned is the Uniform Asset Locator, a decentralized identifier connected to each Knowledge Asset on the DKG.

Then, you can retrieve the same product from the DKG by passing the UAL to dkg.asset.get():

get_asset_result = dkg.asset.get(create_asset_result["UAL"])
print(json.dumps(get_asset_result, indent=4))

The output will be:

did:dkg:otp:20430/0x1a061136ed9f5ed69395f18961a0a535ef4b3e5f/2620264
{
"operation": {
"publicGet": {
"operationId": "c138515a-d82c-45a8-bef9-82c7edf2ef6b",
"status": "COMPLETED"
}
},
"public": {
"assertion": "<https://build.nvidia.com/nvidia/rerank-qa-mistral-4b> <http://schema.org/category> \"Retrieval\" .\n<https://build.nvidia.com/nvidia/rerank-qa-mistral-4b> <http://schema.org/deployer> \"nvidia\" .\n<https://build.nvidia.com/nvidia/rerank-qa-mistral-4b> <http://schema.org/description> \"GPU-accelerated model optimized for providing a probability score that a given passage contains the information to answer a question.\" .\n<https://build.nvidia.com/nvidia/rerank-qa-mistral-4b> <http://schema.org/name> \"rerank-qa-mistral-4b\" .\n<https://build.nvidia.com/nvidia/rerank-qa-mistral-4b> <http://schema.org/tags> \"information retrieval\" .\n<https://build.nvidia.com/nvidia/rerank-qa-mistral-4b> <http://schema.org/tags> \"rerank-qa-mistral-4b\" .\n<https://build.nvidia.com/nvidia/rerank-qa-mistral-4b> <http://schema.org/tags> \"text retrieval\" .\n<https://build.nvidia.com/nvidia/rerank-qa-mistral-4b> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Product> .",
"assertionId": "0x09d8d7c5b82bd09bc3f51770f575e15f1157c6292652d977afbe453932e270ef"
}
}

In this step, you’ve seen how to instruct the NVIDIA LLM to generate Product entities according to user prompts, and how to insert them into the DKG. You’ll now learn how to generate SPARQL queries for products using the LLM.

Step 3 — Generating SPARQL with the AI model

In this step, you’ll use the NVIDIA LLM to generate a SPARQL query for retrieving results from the DKG. The data that we’ll be querying consists of Knowledge Assets that represent each of the models from the NVIDIA Build platform — with the same properties as the one created in Step 2.

SPARQL is a query language for graphs and is very similar to SQL. Just like SQL, it has a SELECT and a WHERE clause, so as long as you’re familiar with SQL you should be able to understand the structure of the queries pretty well.

The data that you’ll be querying is related to Products, stored in the DKG as Knowledge Assets.

Similarly to before, you’ll need to instruct the LLM on what to do:

all_categories = ["Biology", "Gaming", "Visual Design", "Industrial", "Reasoning", "Retrieval", "Speech"];
all_tags = ["3d-generation", "automatic speech recognition", "chat", "digital humans", "docking", "drug discovery", "embeddings", "gaming", "healthcare", "image generation", "image modification", "image understanding", "language generation", "molecule generation", "nvidia nim", "protein folding", "ranking", "retrieval augmented generation", "route optimization", "text-to-3d", "advanced reasoning", "ai weather prediction", "climate science"];

instruction_message = '''
You have access to data connected to the new NVIDIA Build platform and the products available there.
You have a schema in JSON-LD format that outlines the structure and relationships of the data you are dealing with.
Based on this schema, you need to construct a SPARQL query to retrieve specific information from the NVIDIA products dataset that follows this schema.

The schema is focused on AI products and includes various properties such as name, description, category, deployer, URL and tags related to the product.
My goal with the SPARQL queries is to retrieve data from the graph about the products, based on the natural language question that the user posed.

Here's an example of a query to find products from category "AI Weather Prediction":
```sparql
PREFIX schema: <http://schema.org/>

SELECT ?product ?name ?description ?ual

WHERE { ?product a schema:Product ;
GRAPH ?g
{ ?product schema:tags "ai weather prediction" ; schema:name ?name ; schema:description ?description }

?ual schema:assertion ?g
FILTER(CONTAINS(str(?ual), "20430")) }```

Pay attention to retrieving the UAL, this is a mandatory step of all your queries. After getting the product with '?product a schema:Product ;' you should wrap the next conditions around GRAPH ?g { }, and later use the graph retrieved (g) to get the UAL like in the example above.

Make sure you ALWAYS retrieve the UAL no matter what the user asks for and filter whether it contains "2043".
Make sure you always retrieve the NAME and the DESCRIPTION of the products.

Only return the SPARQL query wrapped in ```sparql ``` and DO NOT return anything extra.
'''

The instruction_message prompt contains the instructions in natural language. You provide the model with a schema of a Product object (in JSON-LD notation) and an example SPARQL query in the appropriate format for the DKG. You also order it to pay attention to the examples and to return nothing else except the SPARQL query.

You can now define the chat history and pass in a user prompt to get the resulting code:

limitations_instruction = '''\nThe existing categories are: {}. The existing tags are: {}'''.format(all_categories, all_tags)
user_instruction = '''Give me all NVIDIA tools which I can use for use cases related to biology.'''

chat_history = [{"role":"system","content":instruction_message + limitations_instruction}, {"role":"user","content":user_instruction}]

completion = client.chat.completions.create(
model="meta/llama2-70b", # NVIDIA lets you choose any LLM from the platform
messages=chat_history,
temperature=0,
top_p=1,
max_tokens=1024,
)

answer = completion.choices[0].message.content
print(answer)

The output will look similar to this:

```sparql
PREFIX schema: <http://schema.org/>

SELECT ?product ?name ?description

WHERE { ?product a schema:Product ;
GRAPH ?g
{ ?product schema:category "Biology" ;
?product schema:name ?name ;
?product schema:description ?description }

?ual schema:assertion ?g
FILTER(CONTAINS(str(?ual), "20430")) }
```

This SPARQL query retrieves all products that have the category "Biology" and returns their names and descriptions. The `GRAPH ?g` clause is used to retrieve the graph that contains the product information, and the `FILTER` clause is used to filter the results to only include products that have a UAL that contains "20430".

You can employ a similar strategy to clean the result from the Markdown code formatting:

def clean_sparql_query(input_string):
    start_index = input_string.find("```sparql")
    end_index = input_string.find("```", start_index + 1)
    if start_index != -1 and end_index != -1:
        cleaned_query = input_string[start_index + 9:end_index].strip()
        return cleaned_query
    else:
        return input_string

query = clean_sparql_query(answer)
print(query)

The output will now be clean SPARQL:

PREFIX schema: <http://schema.org/>

SELECT ?product ?name ?description

WHERE { ?product a schema:Product ;
GRAPH ?g
{ ?product schema:category "Biology" ;
?product schema:name ?name ;
?product schema:description ?description }

?ual schema:assertion ?g
FILTER(CONTAINS(str(?ual), "20430")) }
Step 4 — Querying the OriginTrail DKG

Querying the DKG is very easy with SPARQL. You only need to specify the query and the repository to search:

query_result = dkg.graph.query(query, "privateCurrent")
print(query_result)

The privateCurrent option ensures that the SPARQL query retrieves the latest state of Knowledge Assets in the DKG, as it includes the private and public data of the latest finalized state of the Graph.

An example result for the above query looks like this:

[
{
'product': 'https://build.nvidia.com/nvidia/molmim-generate',
'description': '"MolMIM performs controlled generation, finding molecules with the right properties."',
'name': '"molmim-generate"',
'ual': 'did:dkg:otp:20430/0x1a061136ed9f5ed69395f18961a0a535ef4b3e5f/2619549'
},
{
'product': 'https://build.nvidia.com/meta/esmfold',
'description': '"Predicts the 3D structure of a protein from its amino acid sequence."',
'name': '"esmfold"',
'ual': 'did:dkg:otp:20430/0x1a061136ed9f5ed69395f18961a0a535ef4b3e5f/2619597'
},
{
'product': 'https://build.nvidia.com/mit/diffdock',
'description': '"Predicts the 3D structure of how a molecule interacts with a protein."',
'name': '"diffdock"',
'ual': 'did:dkg:otp:20430/0x1a061136ed9f5ed69395f18961a0a535ef4b3e5f/2619643'
}
]

You’ll now be able to utilize the DKG to improve the runtime cost of the LLM model, as well as have it rely on trustable data stored in the Knowledge Assets.

Step 5 — Vector search with NVIDIA embed-qa-4 model and the DKG

In this step, you’ll build an in-memory vector DB based on the verified data queried from the DKG and invoke the NVIDIA model with it to generate more accurate results for the end user. Sometimes a SPARQL query alone may not be enough to answer a question, and a vector database lets you extract specific Knowledge Assets by semantic similarity.

First, you initialize the NVIDIA embed-qa-4 model that you’ll use to generate the vector embeddings:

import requests

invoke_url = "https://ai.api.nvidia.com/v1/retrieval/nvidia/embeddings"

headers = {
"Authorization": f"Bearer {os.getenv('NVIDIA_API_TOKEN')}",
"Accept": "application/json",
}

def get_embeddings(input):
    payload = {
        "input": input,
        "input_type": "query",
        "model": "NV-Embed-QA"
    }

    session = requests.Session()

    response = session.post(invoke_url, headers=headers, json=payload)

    response.raise_for_status()
    response_body = response.json()
    return response_body["data"][0]["embedding"]

Then, you build the vector DB in-memory by making embeddings based on the Product description:

from annoy import AnnoyIndex

def build_embeddings_index(embeddings, n_trees=10):
    dim = len(embeddings[0])
    index = AnnoyIndex(dim, 'angular')  # Using angular distance

    for i, vector in enumerate(embeddings):
        index.add_item(i, vector)

    index.build(n_trees)
    return index

def add_text_embeddings(products):
    for product in products:
        product["embedding"] = get_embeddings([product["description"]])

# 'products' is the list of product dictionaries returned by the SPARQL query in Step 4
add_text_embeddings(products)

Then, you can retrieve the Product that is semantically nearest to the user prompt, in order to generate the answer to his question with the following:

index = build_embeddings_index([product["embedding"] for product in products])
question = "I would like a model which will help me find the molecules with the chosen properties."

nearest_neighbors = index.get_nns_by_vector(get_embeddings(question), 1, include_distances=True)
index_of_nearest_neighbor = nearest_neighbors[0][0]

print(f"Vector search result: {products[index_of_nearest_neighbor]['description']}")
print(f"Product name: {products[index_of_nearest_neighbor]['name']}")
print(f"https://dkg.origintrail.io/explore?ual={products[index_of_nearest_neighbor]['ual']}")

The output will be similar to this:

Vector search result: Predicts the 3D structure of how a molecule interacts with a protein.
Product name: diffdock
https://dkg-testnet.origintrail.io/explore?ual=did:dkg:otp:20430/0x1a061136ed9f5ed69395f18961a0a535ef4b3e5f/2619643

Conclusion

You have now created a Python project which uses tools from the NVIDIA Build platform to help create and query verifiable Knowledge Assets on OriginTrail DKG. You’ve seen how to instruct it to generate SPARQL queries from Natural Language inputs and query the DKG with the resulting code, as well as how to create embeddings and use vector similarity search to find the right Knowledge Assets.

Additionally, you’ve explored the capabilities of the NVIDIA Build platform and how to use it with the DKG, offering versatile options for both structured data querying with SPARQL and semantic similarity search with vectors. With these tools at your disposal, you’re well-equipped to tackle a wide range of tasks requiring knowledge discovery and retrieval by using the decentralized RAG (dRAG).

Decentralized RAG with OriginTrail DKG and NVIDIA Build ecosystem was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


Oasis Open Projects

Building Trust in AI with Open Standards

By Francis Beland, Executive Director, OASIS Open

Open standards in artificial intelligence (AI) are important for a number of reasons:

Interoperability: Open standards allow different AI systems to work together seamlessly, regardless of who developed them or what platform they run on. This means that data and services can be shared across different systems, increasing efficiency and reducing costs.

Innovation: Open standards encourage innovation by providing a common framework for developers to work within. This can lead to the development of new AI tools and techniques that can benefit a wide range of users.

Transparency: Open standards can help increase the transparency of AI systems, making it easier for users to understand how they work and how they make decisions. This is particularly important in applications such as healthcare, finance, and legal, where transparency and accountability are critical.

Accessibility: Open standards can help make AI more accessible to a wider range of users, including those who may not have the resources to develop their own systems. This can help democratize access to AI technology and promote inclusivity.

Trust: Open standards can help build trust in AI by establishing a common set of ethical principles and technical standards that developers can adhere to. This can help address concerns around bias, privacy, and security, and promote responsible AI development and deployment.

The post Building Trust in AI with Open Standards appeared first on OASIS Open.


Hyperledger Foundation

Blockchain Pioneers: Hyperledger Burrow

As we laid out in our Helping a Community Grow by Pruning Inactive Projects post, there is an important life cycle to well governed open source projects. Since our launch in 2015, Hyperledger Foundation has hosted a number of now retired projects that helped drive innovation and advanced the development of enterprise-grade blockchain technologies. This series will look back at the

As we laid out in our Helping a Community Grow by Pruning Inactive Projects post, there is an important life cycle to well governed open source projects. Since our launch in 2015, Hyperledger Foundation has hosted a number of now retired projects that helped drive innovation and advanced the development of enterprise-grade blockchain technologies. This series will look back at the impact of these pioneering projects.


Trust over IP

Authentic Chained Data Containers (ACDC) Task Force Announces Public Review

A blueprint for creating truly decentralized, authentic, and verifiable ecosystems of identifiers, “credentials”, and attestations The post Authentic Chained Data Containers (ACDC) Task Force Announces Public Review appeared first on Trust Over IP.

The Authentic Chained Data Containers (ACDC) Task Force at the Trust Over IP Foundation is pleased to request public review of the following deliverables:

Key Event Receipt Infrastructure (KERI) specification
Authentic Chained Data Containers specification
Composable Event Streaming Representation specification

Together, this suite of specifications provides a blueprint for creating truly decentralized, authentic, and verifiable ecosystems of identifiers, “credentials” [see footnote], and attestations.

The specifications describe a series of unique, innovative features:

Pre-rotation of keys, enabling truly unbounded term identifiers;
Cryptographic root-of-trust;
Chained “credentials” [see footnote] with fully verifiable proof of ownership and proof of authorship;
A serialization format that is optimized for both text and binary representations equally, with unique properties that support lookahead streaming for uncompromised scalability.

This suite of specifications contains additional sub-specifications including Out-Of-Band Introductions, Self-Addressing Identifiers and a revolutionary “path signature” approach for signed containers required to provide a comprehensive solution for Organizational Identity.
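To give a flavour of one of these building blocks, here is a minimal sketch of the idea behind a Self-Addressing Identifier: the identifier is a digest of the very content that carries it, computed with the identifier field held at a placeholder. This is an illustration only, using plain JSON and SHA-256 rather than the normative CESR serialisation and derivation codes defined in the specifications.

import hashlib
import json

def self_addressing_id(data: dict, id_field: str = "d") -> dict:
    # Illustrative only: digest the content with the identifier field blanked out,
    # then embed that digest as the identifier. The real ACDC/CESR rules use
    # specific serialisation and derivation codes.
    content = dict(data, **{id_field: ""})  # placeholder for the identifier
    serialized = json.dumps(content, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(serialized.encode()).hexdigest()
    return dict(content, **{id_field: digest})

def verify_self_addressing_id(data: dict, id_field: str = "d") -> bool:
    # Recompute the digest and check it matches the embedded identifier
    return self_addressing_id(data, id_field)[id_field] == data[id_field]

credential = self_addressing_id({"d": "", "issuer": "did:example:issuer", "claim": "member"})
assert verify_self_addressing_id(credential)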

With the launch of the vLEI Root of Official Trust this suite of specifications saw its first production deployment through the Python reference implementation in 2022.

The Task Force expects feedback to be provided by April 20, 2024 via GitHub issues on the following repositories using the ToIP Public Review Process:

https://github.com/trustoverip/tswg-keri-specification/issues
https://github.com/trustoverip/tswg-acdc-specification/issues
https://github.com/trustoverip/tswg-cesr-specification/issues

Licensing Information:

The Trust Over IP Foundation Technology Stack Working group deliverables are published under the following licenses:

Copyright mode: OWFa 1.0 (available at https://www.openwebfoundation.org/the-agreements/the-owf-1-0-agreements-granted-claims/owfa-1-0)
Patent mode: OWFa 1.0 (available at https://www.openwebfoundation.org/the-agreements/the-owf-1-0-agreements-granted-claims/owfa-1-0)
Source code: Apache 2.0 (available at http://www.apache.org/licenses/LICENSE-2.0.html)

Note: The Task Force considers “credentials” or “verifiable credentials” as termed by the W3C, only one use and a subset of ACDCs.

The post Authentic Chained Data Containers (ACDC) Task Force Announces Public Review appeared first on Trust Over IP.

Wednesday, 20. March 2024

Project VRM

Personal AI at VRM Day and IIW

Most AI news is about what the giants (OpenAI/Microsoft, Meta, Google/Apple, Amazon, Adobe, Nvidia) are doing (seven $trillion, anyone?), or what AI is doing for business (all of Forbes’ AI 50). Against all that, personal AI appears to be about where personal computing was in 1974: no longer an oxymoron but discussed more than delivered. […]

Prompt: A woman uses personal AI to know, get control of, and put to better use all available data about her property, health, finances, contacts, calendar, subscriptions, shopping, travel, and work. Via Microsoft Copilot Designer, with spelling corrections by the author.

Most AI news is about what the giants (OpenAI/Microsoft, Meta, Google/Apple, Amazon, Adobe, Nvidia) are doing (seven $trillion, anyone?), or what AI is doing for business (all of Forbes’ AI 50). Against all that, personal AI appears to be about where personal computing was in 1974: no longer an oxymoron but discussed more than delivered.

For evidence, look up “personal AI.” All the results will be about business (see here and here) or “assistants” that are just suction cups on the tentacles of giants (Siri, Google Assistant, Alexa, Bixby), or wannabes that do the same kind of thing (Lindy, Hound, DataBot).

There may be others, but three exceptions I know are Kin, Personal AI and Pi.

Personal AI is finding its most promoted early uses on the side of business more than the side of customers. Zapier, for example, explains that Personal AI “can be used as a productivity or business tool.”

Kin and Pi are personal assistants that help you with your life by surveilling your activities for your own benefit. I’ve signed up for both, but have only experienced Pi. When I ask it to help me with the stuff outlined in (and under) the AI-generated image above, it wants to hook me up with a bunch of siloed platforms that cost money, or to do geeky things (PostgreSQL, MongoDB, Python) on my own computer. Provisional conclusion: Pi means well, but the tools aren’t there yet. [Later… Looks like it’s going to morph into some kind of B2B thing, or be abandoned outright, now that Inflection AI’s CEO, Mustafa Suleyman, is gone to Microsoft. Hmm… will Microsoft do what we’d like in this space?]

Open source approaches are out there: OpenDAN, Khoj, Kwaai, and Llama are four, and I know at least one will be at VRM Day and IIW.

So, since personal AI may finally be what pushes VRM into becoming a Real Thing, we’ll make it the focus of our next VRM Day.

As always, VRM Day will precede IIW in the same location: the Boole Room of the Computer History Museum in Mountain View, just off Highway 101 in the heart of Silicon Valley. It’ll be on Monday, 15 April, and start at 9am. There’s a Starbucks across the street and ample parking because the museum is officially closed on Mondays, but the door is open. We lunch outdoors (it’s always clear) at the sports bar on the other corner.

Registration is open now at this Eventbrite link:

https://vrmday2024a.eventbrite.com

You can also just show up, but registering gives us a rough headcount, which is helpful for bringing in the right number of chairs and stuff like that.

See you there!

 


Elastos Foundation

Elastos Announces Partnership with IoTeX to Deliver Security and Access to DePIN Infrastructure

Elastos today announced a partnership with IoTeX, to deliver ID verification and validation services across the Decentralized Physical Infrastructure Networks (DePINs) specialists’ portfolio including DePINscan, DePINasset and W3bstream. DePINs lie at the intersection between crowd-sourced participation, funding and governance models and so-called Real World Assets (RWA) – tangible infrastructure s

Elastos today announced a partnership with IoTeX, to deliver ID verification and validation services across the Decentralized Physical Infrastructure Networks (DePINs) specialists’ portfolio including DePINscan, DePINasset and W3bstream.

DePINs lie at the intersection between crowd-sourced participation, funding and governance models and so-called Real World Assets (RWA) – tangible infrastructure such as buildings, equipment or other capital-intensive assets. They offer a mechanism to recruit and reward participants to maintain these assets, through the Blockchain. When the latter is combined with a physical interface such as IoT, the contributions of these so-called physical ‘node managers’ can be tracked and, in turn, rewarded against tokens whose value itself increases with the development and use of the asset.

DePINscan provides a ready-to-use dashboard with essential visualizations for IoT projects and rollouts, while W3bstream is a decentralized protocol that connects data generated in the physical world to the Blockchain world. IoTeX harnesses its innovative Roll-Delegated Proof of Stake (Roll-DPoS) consensus mechanism, designed to optimize speed and scalability for the seamless integration of IoT devices, while ensuring integrity throughout the entire process. Stakeholders cast their votes to elect specific block producers; block producers receive rewards for their contributions, which they subsequently share with the stakeholders who endorsed them.
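As a rough illustration of the delegated-proof-of-stake pattern described above (stake-weighted election of block producers who then share rewards with the stakeholders that backed them), here is a simplified sketch in Python; the election rule, reward split and figures are hypothetical and do not describe IoTeX’s actual Roll-DPoS implementation.

from collections import defaultdict

# votes: stakeholder -> (candidate, delegated stake). Illustrative numbers only.
votes = {
    "alice": ("producer_a", 300),
    "bob":   ("producer_a", 200),
    "carol": ("producer_b", 400),
    "dave":  ("producer_c", 150),
}

def elect_producers(votes, n_producers=2):
    # Pick the n candidates with the most delegated stake
    stake_by_candidate = defaultdict(int)
    for candidate, stake in votes.values():
        stake_by_candidate[candidate] += stake
    return sorted(stake_by_candidate, key=stake_by_candidate.get, reverse=True)[:n_producers]

def share_reward(votes, producer, block_reward, producer_cut=0.1):
    # Producer keeps a cut, then shares the rest pro-rata with its backers
    backers = {s: stake for s, (c, stake) in votes.items() if c == producer}
    total = sum(backers.values())
    payouts = {producer: block_reward * producer_cut}
    for stakeholder, stake in backers.items():
        payouts[stakeholder] = payouts.get(stakeholder, 0) + block_reward * (1 - producer_cut) * stake / total
    return payouts

print(elect_producers(votes))                              # e.g. ['producer_a', 'producer_b']
print(share_reward(votes, "producer_a", block_reward=16.0))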

Jonathan Hargreaves, Elastos’ Global Head of Business Development & ESG, describes the partnership as Web3’s ‘next frontier’.

“Extending the benefits of the SmartWeb in terms of disintermediation, transparency and privacy into the physical domain is a logical but nonetheless exciting next step.  Our partnership with IoTeX means that entrepreneurs and businesses of any size will now have access to infrastructure that would otherwise be off limits to them, direct and on their terms.  This epitomizes Web3’s promise to level the playing field, thanks to its unique ability to ensure irrefutable identity proof which actually requires neither party to relinquish control of the same,” he says.

Raullen Chai, IoTeX’s co-founder and CEO, explains that DePINs permit an entirely new generation of businesses and entrepreneurs to access and monetize global infrastructure – from buildings to cabling, for instance – that otherwise would be prohibitively expensive or inaccessible.    

“Our partnership with Elastos represents an important milestone.  Extending our offering to the Elastos Smart Chain (ESC) offers some compelling advantages, including direct integration with ‘Layer 2’ Bitcoin, meaning that agreements can be embedded and reconciled direct in the World’s most popular and trusted digital currency.  This is an essential capability as DePINs become more mainstream,” he says. 

Interested in staying up to date? Follow Elastos here and join our live telegram chat.


DIF Blog

DIF's work on Interoperability Profiles

The challenge  Interoperability is a basic requirement for secure identity management and seamless communication between identity systems and services. However, in a world of multiple digital identity standards and protocols, interoperability doesn’t just happen ‘out of the box’.  Identity standards and protocols tend to

The challenge 

Interoperability is a basic requirement for secure identity management and seamless communication between identity systems and services.

However, in a world of multiple digital identity standards and protocols, interoperability doesn’t just happen ‘out of the box’. 

Identity standards and protocols tend to be flexible by design, entailing a range of decisions about how they should be implemented. 

Differences in business priorities, local regulations and how these are interpreted drive divergent implementations, making interoperability hard to achieve in practice.

This means that standards are a necessary, but not sufficient part of interoperability.

Interop Profiles: reducing optionality to enable interoperability

Interop profiles describe a set of specifications and other design choices to establish interoperability. These profiles specify items like

Data models and supported formats
Protocols to transfer Verifiable Credentials (VCs)
Which Decentralized Identifier (DID) methods must be supported
Supported revocation mechanism
Supported signature suites

They also specify what’s out of scope, further reducing optionality and easing implementation. 

Profiles can be developed to achieve interoperability for a variety of needs in order to establish a trusted ecosystem.
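For a sense of what such a profile pins down, here is a purely illustrative sketch, expressed as a Python dictionary, of the kinds of choices a profile records; the specific values are hypothetical examples rather than the contents of any actual DIF profile.

# Hypothetical example of the choices an interoperability profile fixes.
# The values below are illustrative and do not describe a real profile.
example_profile = {
    "credential_format": "jwt_vc",                        # data model / supported format
    "exchange_protocols": ["OpenID4VCI", "OpenID4VP"],    # issuance and presentation protocols
    "did_methods": ["did:web", "did:jwk"],                # DID methods that must be supported
    "revocation": "StatusList2021",                       # supported revocation mechanism
    "signature_suites": ["ES256"],                        # supported signature suites
    "out_of_scope": ["BBS+ selective disclosure"],        # explicitly excluded options
}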

Interop Profiles and Decentralized Identity

There is growing support for interoperability profiles that enable real-world applications of decentralized identity standards and technologies. 

For example, the US Department of Homeland Security (DHS) leads the Silicon Valley Innovation Program, which focuses (among other things) on digitization of trade documentation using Decentralized Identifiers and Verifiable Credentials. To prove interoperability, and help build confidence that the solution doesn’t result in vendor lock-in, participants have developed profiles and interoperability test suites to ensure they are able to exchange and verify trade credentials. 

The International Air Transport Association (IATA) plays a similar role in ensuring interoperability within the travel supply chain (for example, when using verifiable credentials to onboard travel agents and intermediaries to an airline's agency portal). 

The Jobs for the Future Foundation has hosted a series of interoperability events (called “JFF Plug Fests”) to select profiles and develop test harnesses demonstrating that individuals can receive and share their credentials using their choice of conformant wallets, and that the flows work across conformant issuers and relying parties.

How DIF is working to make life easier for implementers 

The interoperability challenges highlighted in this article matter for our members. 

For one thing, it’s hard to build workable products, or viable ecosystems, on top of standards and protocols with divergent implementations.

There’s also a growing need for specific approaches to decentralized identity within different industries, regions, and use cases (such as the trade, travel and employment cases mentioned above). 

Interoperability is a core part of the Decentralized Identity Foundation (DIF)’s mission.

That is why DIF has hosted collaborative work to develop robust interoperability profiles for a number of years. 

Examples include the JWT VC Issuance Profile, which describes the technical protocols, data formats, and other requirements to enable interoperable issuance of VCs from Issuers to Wallets (see https://github.com/decentralized-identity/jwt-vc-issuance-profile ), and the JWT VC Presentation Profile, which describes the technical protocols, data formats, and other technical requirements to enable interoperable exchange of VC presentations between Wallets and Verifiers (see https://github.com/decentralized-identity/jwt-vc-presentation-profile ). 

Taking a closer look at these examples, the VC Data Model v1.1 defines the data model of Verifiable Credentials (VCs) but does not prescribe standards for transport protocol, key management, authentication, query language, et cetera. The same is true for DIDs.

A range of specifications are available, providing options for how these things (transport, key management, etc) are achieved, but if implementers have to support all possible specifications (and combinations), it would be a lot of work.

So a profile is a way to make choices and even restrictions for a certain use case, allowing all participants to establish interoperability.

Summary

Collaboration on interoperability is an essential part of the process of establishing a viable digital trust ecosystem. 

Interop profiles define specific requirements that must be followed by identity providers, relying parties, and other stakeholders.

DIF provides a neutral venue to collaborate on interop profile development. 

Together with our working group tools, best practices and IPR protection, and our members’ subject matter expertise in decentralized identity technologies, DIF is the destination of choice to host this work. 

Got a question? Email us - we’ll be happy to discuss your requirements. 


Velocity Network

Jen Berres & Mike Andrus on why HCA Healthcare is adopting verifiable digital credentials

On Mar. 8, 2024, HCA Healthcare’s Senior Vice President and Chief Human Resources Officer, Jen Berres, and Vice President of Operations and Technology, Mike Andrus, joined Velocity’s Co-founder and Head of Ecosystem, Etan Bernstein, to discuss the verifiable digital credential movement, the value to healthcare organizations in particular, and the opportunity to work together to solve for an HR cha

Identity At The Center - Podcast

A new episode of the Identity at the Center podcast is now available

A new episode of the Identity at the Center podcast is now available. This is a special Sponsor Spotlight episode, made in collaboration with our sponsor, Zilla Security. We had a great conversation with Deepak Taneja, CEO & Co-founder of Zilla Security, discussing a range of topics from how Zilla differentiates itself in the crowded IAM market to the role of Robotic Process Automation (RPA) i

A new episode of the Identity at the Center podcast is now available. This is a special Sponsor Spotlight episode, made in collaboration with our sponsor, Zilla Security. We had a great conversation with Deepak Taneja, CEO & Co-founder of Zilla Security, discussing a range of topics from how Zilla differentiates itself in the crowded IAM market to the role of Robotic Process Automation (RPA) in the identity lifecycle.

You can listen to this episode on our website, idacpodcast.com, or in your favorite podcast app. Don't miss it!

#iam #podcast #idac


Next Level Supply Chain Podcast with GS1

Future-Proofing Retail with RFID and 2D Barcodes with Sarah Jones Fairchild

Radio frequency identification (RFID) and 2D barcodes are transforming how we handle the supply chain.  Sarah Jones Fairchild, Vice President of Sales Operations at SWIM USA, talks 2D barcode applications for customer safety, efficiency in retail checkout, inventory management, and the broader implications for companies as they prepare for the technological demands of the future. Sarah expl

Radio frequency identification (RFID) and 2D barcodes are transforming how we handle the supply chain. 

Sarah Jones Fairchild, Vice President of Sales Operations at SWIM USA, talks 2D barcode applications for customer safety, efficiency in retail checkout, inventory management, and the broader implications for companies as they prepare for the technological demands of the future. Sarah explains the importance of high-quality data and the impact of incorrect data on consumers. She also touches on the potential for these technologies to address industry-specific needs and regulatory requirements. 

Sarah highlights her personal experience with tech at home and work, specifically how it helps align information for everyone. The discussion emphasizes the importance of GS1 standards for ensuring compatibility in the supply chain and the necessity of proper data management to fully leverage RFID and 2D barcode capabilities. The conversation also covers supply chain tracking information for business owners of all types and why RFID can take a few years to implement. 

 

Key takeaways: 

Integrating RFID and 2D barcode technologies in supply chain operations is essential for improving accuracy and efficiency.

Data quality and management are challenging across industries, particularly with the need for high compatibility and usability standards.

Companies must embrace technologies such as RFID and 2D barcodes for the future.

 

Resources: 

What Is RFID Technology, and How Does It Work?

2D Barcodes: Changing the way you eat, shop, and live

Sunrise 2027: The Next Dimension in Barcodes

Enhance Your Supply Chain Visibility

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with guest:

Sarah Jones Fairchild on LinkedIn

Check out SWIM USA


Digital Identity NZ

Hello Autumn: Let’s Dive into Serious Work Together

If the first two months of 2024 for Digital Identity NZ are anything to go by, this year is certainly turning out to be every bit as busy as 2023. The post Hello Autumn: Let’s Dive into Serious Work Together appeared first on Digital Identity New Zealand.

Kia ora,

If the first two months of 2024 for Digital Identity NZ are anything to go by, this year is certainly turning out to be every bit as busy as 2023. It is a different kind of busy, with more collaboration and partnership engagement needed to ‘get things done’ in the digital identity domain, against a backdrop of regulation and economic headwinds.

A couple of weeks ago, the year’s first Coffee Chat saw good attendance, as did last month’s Air New Zealand and Authsignal-sponsored webinar on passkeys. This exclusive member-only event shared how DINZ members Authsignal and Air New Zealand worked together to deliver a world-class implementation of passkeys to secure Air New Zealand’s customers’ accounts. Speaking of Authsignal, founder and DINZ Executive Council Member Justin Soong wrote this exceptional thought piece on AI published in Forbes last month. And DINZ member SSS – IT Security Specialists received this accolade!

Next week, members will receive a personal email from me seeking expressions of interest, particularly from digital ID service and attribute providers, to participate in an investigative sprint early next month from DINZ member PaymentsNZ. The aim is to surface the digital identity-related issues that people encounter in the payments industry, and develop best practice requirements to overcome them as part of PaymentsNZ’s Next Generation Payments programme. Stay tuned.

We kick off April with a lunchtime fireside chat; Digital Health Identity: History, current state and the future with two Te Whatu Ora specialists. There’s so much happening in this space. You can find out more and register here.

If you’re getting the impression that April is the month for digital identity, you’re correct! Tuesday 9 April is World Identity Management Day! While co-founded by the Identity Defined Security Alliance and the National Cybersecurity Alliance in the US in 2021, the day is recognised in many countries globally. In its fourth year, the 2024 Virtual Conference brings together identity and security leaders and practitioners from all over the world to learn and engage.

April is also a favourite time of year to publish research that helps to level-set our own strategies and plans, as DINZ did last year. This Australian research, forwarded by a public sector member, would probably show similar results in NZ, as reflected in DINZ member InternetNZ’s insights research. And the EU digital wallet is taking shape as it aims to showcase a robust and interoperable platform for digital identification, authentication and electronic signatures based on common standards across the European Union. We hope to continue our research and additional initiatives for 2024, and we’re continually looking for support in the way of sponsorship from our members. Click here to find out how you can support DINZ’s research, and future ambitions.

Ngā mihi

Colin Wallis
Executive Director, Digital Identity NZ

Read the full news here: Hello Autumn: Let’s Dive into Serious Work Together

SUBSCRIBE FOR MORE

The post Hello Autumn: Let’s Dive into Serious Work Together appeared first on Digital Identity New Zealand.

Tuesday, 19. March 2024

Hyperledger Foundation

Why Hyperledger Besu is a Top Choice for Financial Use Cases

Hyperledger Besu has emerged as a preferred runtime for EVM-based financial initiatives worldwide. For projects like tokenization, settlements, CBDCs (Central Bank Digital Currencies), and trade finance, Besu stands out for its robust security features, versatility in network construction, performance, pluggability, and enterprise-friendly licensing and programming language.

Hyperledger Besu has emerged as a preferred runtime for EVM-based financial initiatives worldwide. For projects like tokenization, settlements, CBDCs (Central Bank Digital Currencies), and trade finance, Besu stands out for its robust security features, versatility in network construction, performance, pluggability, and enterprise-friendly licensing and programming language.

Monday, 18. March 2024

FIDO Alliance

Tech Telegraph: Best PC and laptop security accessories 2024

If you haven’t had the pleasure of using biometrics on a device for authentication through Windows Hello, you’re missing out. It’s much faster and easier than having to type in […]

If you haven’t had the pleasure of using biometrics on a device for authentication through Windows Hello, you’re missing out. It’s much faster and easier than having to type in your password.


Android Headlines: X Android App Beta Gets Password-less Passkeys Authentication Support

Passkeys enhance security by eliminating traditional passwords and relying on the interaction between Private and Public keys for user authentication, reducing the instance of phishing attacks and data breaches. Passkeys […]

Passkeys enhance security by eliminating traditional passwords and relying on the interaction between Private and Public keys for user authentication, reducing the instance of phishing attacks and data breaches. Passkeys are gaining traction among various platforms, including websites, gaming platforms, and Windows 11 apps.
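As a purely conceptual illustration of that private/public key interaction, the sketch below walks through a bare-bones challenge-response flow with an Ed25519 key pair (using the Python cryptography package); it is not the actual WebAuthn/FIDO2 protocol and omits attestation, origin binding, and user verification.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator creates a key pair and the server
# stores only the public key (no shared secret to phish or leak).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Login: the server issues a random challenge...
challenge = os.urandom(32)

# ...the authenticator signs it with the private key...
signature = private_key.sign(challenge)

# ...and the server verifies the signature with the stored public key.
try:
    public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")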


The New Stack: 3 Steps to Make Logins with Passkeys Reliable

Passkeys offer modern and secure authentication by enabling cryptography-backed user authentication with a frictionless user experience. With users becoming more accustomed to passkeys, 2024 is the year to ditch passwords […]

Passkeys offer modern and secure authentication by enabling cryptography-backed user authentication with a frictionless user experience. With users becoming more accustomed to passkeys, 2024 is the year to ditch passwords and upgrade to passkeys with these considerations in mind.


Identity At The Center - Podcast

It’s time for the latest episode of the Identity at the Center Podcast

It’s time for the latest episode of the Identity at the Center Podcast! We had the pleasure of welcoming back Andi Hindle, the Conference Chair for Identiverse, for an in-depth discussion about the planning and unique aspects of the Identiverse conference. We explore whether Identiverse is a Digital Identity conference or an IAM conference. Looking forward to an enlightening conversation? Listen t

It’s time for the latest episode of the Identity at the Center Podcast! We had the pleasure of welcoming back Andi Hindle, the Conference Chair for Identiverse, for an in-depth discussion about the planning and unique aspects of the Identiverse conference. We explore whether Identiverse is a Digital Identity conference or an IAM conference. Looking forward to an enlightening conversation? Listen to the full episode at idacpodcast.com or in your favorite podcast app.

#iam #podcast #idac

Friday, 15. March 2024

Identity At The Center - Podcast

Join us for a special Friday episode of The Identity at the Center Podcast

Join us for a special Friday episode of The Identity at the Center Podcast. We discussed the rapidly evolving world of Privileged Access Management with our guest Paul Mezzera. We talked about the driving forces behind these changes and what the future might hold. Listen to our conversation at idacpodcast.com or in your favorite podcast app. #iam #podcast #idac

Join us for a special Friday episode of The Identity at the Center Podcast. We discussed the rapidly evolving world of Privileged Access Management with our guest Paul Mezzera. We talked about the driving forces behind these changes and what the future might hold. Listen to our conversation at idacpodcast.com or in your favorite podcast app.

#iam #podcast #idac

Thursday, 14. March 2024

Berkman Klein Center

Accuracy, Incentives, Honesty: Insights from COVID-19 Exposure Notification Apps

The next pandemic response must respect user preferences or risk low adoption By Elissa M. Redmiles and Oshrat Ayalon Photo by Mika Baumeister on Unsplash Four years after COVID-19 was first declared a pandemic, policy makers, companies and citizens alike have moved on. The CDC no longer offers separate guidance for COVID-19. Apple and Google have shut down their exposure notificat

The next pandemic response must respect user preferences or risk low adoption

By Elissa M. Redmiles and Oshrat Ayalon

Photo by Mika Baumeister on Unsplash

Four years after COVID-19 was first declared a pandemic, policy makers, companies and citizens alike have moved on. The CDC no longer offers separate guidance for COVID-19. Apple and Google have shut down their exposure notification infrastructure, which was used heavily in the US and Europe. As COVID-19 spread, technologists were called to serve by building and deploying exposure notification apps to scale parts of the contact tracing process. These apps allowed users to report when they tested positive for COVID-19 and to notify other users when they had been in the vicinity of an infected user. But getting people to use exposure notification apps during the pandemic proved challenging.

More than three million lives have been lost to COVID-19 over the past four years. Any hope of losing fewer lives during the next pandemic rests on reflection: what did we do, what can we learn from it, and what can we do better next time? Here, we offer five key lessons-learned from research on COVID-19 apps in the US and Europe that can help us prepare for the next pandemic.

Privacy is important, but accuracy also matters

Privacy was the primary focus in early exposure notification apps, and rightfully so. The apps all trace their users’ medical information and movements in various ways, and may store some or all of that information in a central database in order to inform other users of potential infection. The misuse of this information could easily result in unintentional, or even intentional, harm.

However, research into whether (and how) people used exposure notification apps during the pandemic showed that privacy might not be the most important factor. People care about accuracy, or an app’s rate of incorrect reports of COVID-19 exposure (both false positives and false negatives), which may have also influenced rates of public app adoption. Yet, we still know little about how effective the deployed exposure notification apps were. Future apps will need to have measurement tools and methods designed into them before they are released to accurately track their usefulness.

We need to better understand the role of incentives

Researchers discovered that using direct incentives, such as monetary compensation, to get people to install exposure notification apps worked at first, but had little effect in the long term. In fact, one field study found that people who received money were less likely to still be using the app eight months later than those who didn’t. Paying people to download a contact tracing app is even less effective when the app is perceived to be of poor quality or inaccurate. However, monetary incentives may be able to “compensate” when the app is perceived to be costly in other ways, such as eating up mobile data.

Given the ethical problems and lack of success with direct incentives, focusing on indirect incentives, such as functionality, may be key to increasing adoption. Exposure notification apps have the potential to serve a greater purpose during pandemics than merely exposure notification. Our research found that people using exposure notification apps wanted them to serve as a “one-stop-shop” for quick receipt of test results, information on the state of public health in their region, and assistance finding testing centers.

Future app design needs to examine user wants and expectations to ensure widespread adoption. This is hardly a new concept — every successful “fun” app begins with this user-centered model. Apps that provide these extra benefits to users will not only be better adopted, they will also see more frequent and prolonged use.

…Over a third of the Coronalert app users we interviewed believed that it tracked their location, despite repeated communications over the course of a year that it used proximity rather than location to detect possible exposures.

Honesty is the most effective communication strategy

Exposure notification apps are often framed to the public as having inherent individual benefits: if you use this app, you’ll be able to tell when you’ve been exposed to a disease. In reality, exposure notification apps have a stronger collective benefit of preventing the overall spread of disease in communities. Being honest with potential users about the true benefits is more effective than playing up the less significant individual benefit. When examining how to best advertise Louisiana’s exposure notification app, we found that people were most receptive to the app when its collectivistic benefits were centered.

Honesty and openness in privacy is also essential, especially when it comes to data collection and storage. Despite this transparency, however, people may still make assumptions based on false preconceptions or logic. For example, over a third of the Coronalert app users we interviewed believed that it tracked their location, despite repeated communications over the course of a year that it used proximity rather than location to detect possible exposures.

Integration with existing health systems is essential

There was a disconnect between COVID-19 exposure notification apps and public healthcare systems, even in countries with universal healthcare and government-supported apps. Belgium’s Coronalert app, for example, allowed users to receive their test results faster by linking their test to their app using a unique code. But, testing center staff were not trained on the app and failed to prompt users for that code. Not only was receiving test results a primary motivator in getting people to use the app; failing to link positive results to specific app users reduced the app’s efficacy.

This disconnect may be far greater in countries without universal healthcare or where exposure notification apps are privately created. In order for these apps to be effective, developers must collaborate with public health workers to develop a shared understanding of how testing centers operate, determine the information needed to provide accurate tracking, and decide on the best way to follow up on potential infections.

Resourcing technical capacity is critical

A wide range of exposure notification apps were developed to combat COVID-19, and by many different organizations. In the absence of immediate government action, many of the earliest efforts were led by universities or volunteer efforts. Academics developed the DP3T proximity tracing protocol, which guided Google and Apple’s development of exposure notification infrastructure for Android and iOS phones.

However, privatization of exposure notification infrastructure created an enormous potential for private medical and other information to fall into the hands of corporations who are in the business of big data. It also subjected exposure notification technology to private company’s rules (and whims).

Google and Apple released exposure notification infrastructure in April 2020 but did not release direct-to-user exposure notification functionality until later in the pandemic. This decision left the development of exposure notification apps to public health agencies that lacked the resources and technical capacity to do so. Volunteers stepped in to fill this void. For example, the PathCheck foundation developed exposure notification apps for 7 states and countries on top of the Google-Apple Exposure Notification infrastructure.

“…We need to eliminate these scattered responses, align incentives, and integrate the strengths and perspectives of public, private, and academic bodies to develop protocols, models, and best practices.”

While it is natural for universities to support the public good, and encouraging that private citizens volunteered so much of their time and resources to do so, they should not have to in the next pandemic. To respond to future pandemics, we need to eliminate these scattered responses, align incentives, and integrate the strengths and perspectives of public, private, and academic bodies to develop protocols, models, and best practices.

Applying the lessons learned

Building tech responsibly means not just considering privacy, but providing technology that respects user preferences. When people give up their data, they expect a benefit — be that a collective benefit, such as fighting a pandemic or helping cancer research, or an individual one. They likewise expect utility: apps that are accurate, achieve their goals, and provide a holistic set of features.

If we continue to build tech based on our assumptions of what users want, we risk low adoption of these technologies. And during times of crisis, such as this still-ongoing COVID-19 pandemic, the consequences of low adoption are dire.

Elissa M. Redmiles is a computer scientist specializing in security and privacy for marginalized & vulnerable groups at Georgetown University and Harvard’s Berkman Klein Center.

Oshrat Ayalon is a human-computer interaction researcher focusing on privacy and security at the University of Haifa.

Accuracy, Incentives, Honesty: Insights from COVID-19 Exposure Notification Apps was originally published in Berkman Klein Center Collection on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elastos Foundation

Elastos Launches Grant Program to Accelerate Deployment of “Smart Bitcoin” Applications

Visit Destiny Calls Website Page! Today Elastos, a pioneer in blockchain technology announced the launch of its Destiny Calls Program. Elastos is the creator of BeL2, the first Bitcoin Layer 2 applying zero-knowledge technology to enable the direct development and management of  ‘Bitcoin-native’ smart contracts.  The new program is now welcoming applications from the digital […]

Visit Destiny Calls Website Page!

Today Elastos, a pioneer in blockchain technology announced the launch of its Destiny Calls Program. Elastos is the creator of BeL2, the first Bitcoin Layer 2 applying zero-knowledge technology to enable the direct development and management of  ‘Bitcoin-native’ smart contracts. 

The new program is now welcoming applications from the digital entertainment, gaming and leisure sector utilising Elastos’ decentralised infrastructure, including BeL2, to deliver Bitcoin-denominated services and experiences. The initial cohort of 6 to 8 projects will be backed by up to 100,000 ELA in funding, equivalent to approximately $378,000 USD, to kick-start a new and non-invasive approach to Layer 2 solutions. The program is a key part of Elastos’ ongoing mission to accelerate the development of the user-controlled SmartWeb.

“With the recent launch of Elastos’ BeL2, innovators and entrepreneurs now have access to the functionality of layer 2 blockchains backed by the unparalleled security of Bitcoin,” said Jonathan Hargreaves, Global Head of Business Development & ESG at Elastos. “Bitcoin Layer 2 promises to unlock various applications that will underpin the SmartWeb and has fundamentally addressed some of the capacity and functionality restrictions that have hindered the mainstream adoption of the Bitcoin ecosystem. Destiny Calls will provide crucial initial funding for teams exploring the potential of BeL2 and Elastos’ other SmartWeb infrastructure, and will accelerate the transformation of the internet into a user-driven and interactive ecosystem.”

Projects will be selected by the Destiny Calls board and reviewed with support by QuestBook, the on-chain grant funding review and administration platform. The initial cohort will be focused on three sectors: digital entertainment, gaming and leisure. In addition to funding, as part of the program Elastos will provide marketing and technical support, along with mentorships to support grantees in reaching their program milestones. Interested applicants are encouraged to visit the Destiny Calls page here.

 

Elastos’ Bitcoin Layer2, BeL2

The launch of Destiny Calls follows the recent launch of Elastos’ Bitcoin layer 2, BeL2. BeL2 is the first Bitcoin layer 2 to facilitate the creation, recognition, management and exchange of any Bitcoin-denominated smart contract directly between concerned parties, without workarounds like intermediaries, side chains or additional applications. BeL2 promises to unlock the SmartWeb by providing unprecedented functionality to Bitcoin, and is part of growing industry excitement and focus on unlocking layer 2 functionality on Bitcoin after the significant growth of L2s in the Ethereum ecosystem.

 

Pilot Recipients Announced

As part of the launch, Elastos is confirming that BeatFarm will join Destiny Calls as an inaugural member, having successfully completed pilot projects with Elastos. Beatfarm is a decentralised platform that gives artists direct access to potential collaborators, promoters, producers and industry professionals on their own terms. In collaboration with Elastos, Beatfarm is working to enable artists to establish Smart Contracts on their own terms, with the resulting contracts – eScriptions – secured and assured through Bitcoin and tradeable through a decentralised marketplace. 

“Beatfarm’s success as a pilot project perfectly illustrates the potential of BeL2 to create sustainable business models for decentralised Web3 experiences,” adds Jonathan. “ Beatfarm exemplifies our goal of supporting innovative ideas in digital entertainment, gaming and leisure through Destiny Calls.”

For more information, please visit the Destiny Calls Website page.


MOBI

MOBI Joins eSTART Coalition to Help Modernize Automotive Title and Registration Processes

MOBI Joins eSTART Coalition to Help Modernize Automotive Title and Registration Processes Coalition Advocates for Improvements to Streamline Auto Transactions Los Angeles — 14 March 2024. MOBI, a global nonprofit Web3 consortium, is excited to announce its participation in the Electronic Secure Title and Registration Transformation (eSTART) [...]

MOBI Joins eSTART Coalition to Help Modernize Automotive Title and Registration Processes

Coalition Advocates for Improvements to Streamline Auto Transactions

Los Angeles — 14 March 2024. MOBI, a global nonprofit Web3 consortium, is excited to announce its participation in the Electronic Secure Title and Registration Transformation (eSTART) Coalition as a founding member. eSTART is a group of leading auto industry organizations united in advocating for modern solutions to replace the paper-based processes that currently dominate state and local DMV operations.

The eSTART Coalition focuses on three key areas of vehicle transactions:

Permitting electronic signatures on all title and registration documents;
Adopting tools for electronic submission and processing of title and registration; and
Enabling electronic vehicle records transfers.

Modernizing these processes will result in significant cost and time savings for consumers, state and local DMV operations, and industry participants.

Across the U.S., countless titling/registration service providers maintain unique databases and processes for vehicle registration and titling. While some of these jurisdictions have begun digitizing certain processes, many rely entirely on paper-based and manual workflows. This fragmented approach presents several pain points for Motor Vehicle Authorities (MVAs), private sector participants, and consumers, including:

Lack of standardized processes leading to inconsistencies in data management and accessibility.
Incurrence of substantial costs associated with paper-based systems, including storage, processing, and handling.
Prolonged processing times and increased risk of errors due to manual verification processes.
Missed opportunities for cost savings, efficiency gains, and enhanced customer experiences.

Addressing these pain points requires a solution that can be easily adopted across all jurisdictions rather than a solution that functions at a state, county or municipal jurisdiction level. MOBI and its members are collaborating on a Web3-enabled standardized solution to enhance efficiency and cross-border regulatory compliance in MVA operations with an interoperability framework rooted in self-sovereign data and identities. This unified framework serves as a common language, enabling organizations with diverse business processes and legacy systems to efficiently coordinate in a standardized manner without having to build and maintain new infrastructure.

The implementation of a standardized Web3 ecosystem offers a promising solution to streamline operations, increase efficiency, reduce costs, and greatly improve permissioned-only access to data. The ability to verify identities and transactions in a decentralized way can reduce odometer and titling fraud, eliminate the need for manual identity verification, improve insurance products, and enable more seamless remote transactions (e.g. online sales and road usage charging).

“We’re excited to be part of a coalition that not only shares our vision for a more streamlined and modern automotive industry but is actively working towards making it a reality,” said Tram Vo, MOBI CEO and Co-Founder. “MOBI and its members are proud to bring a unique Web3 standardized approach to this groundbreaking endeavor. Together, we’re setting the stage for a more efficient, interoperable ecosystem that empowers stakeholders through enhanced trust and data privacy for all.”

Other transportation industry organizations, including government agencies, industry partners, and associations, are encouraged to join the eSTART Coalition to advocate for these important changes. For more information about eSTART, please visit www.estartcoalition.org or contact info@estartcoalition.org.

About MOBI

MOBI is a global nonprofit Web3 consortium. We are creating standards for trusted self-sovereign data and identities (e.g. vehicles, people, businesses, things), verifiable credentials, and cross-industry interoperability. Our goal is to make the digital economy more efficient, equitable, decentralized, and sustainable while preserving data privacy for users and providers alike. For additional information about joining MOBI, please visit www.dlt.mobi.

About eSTART Coalition

The Electronic Secure Title and Registration Transformation (eSTART) Coalition is a united group of leading automotive organizations committed to modernizing and streamlining automotive title and registration processes. eSTART focuses on advocating for the implementation of efficient technology solutions to replace the paper-dependent systems currently used by DMVs. Through collective advocacy and action at the local and national levels, the coalition aims to drive significant improvement in automotive industry processes in ways that benefit all customers, DMVs and industry participants.

For more information, please visit www.estartcoalition.org.

Media Contact: Grace Pulliam, MOBI Communications Manager

Email: grace@dlt.mobi | Twitter: twitter.com/dltmobi | Linkedin: MOBI

The post MOBI Joins eSTART Coalition to Help Modernize Automotive Title and Registration Processes first appeared on MOBI | The New Economy of Movement.

Wednesday, 13. March 2024

Elastos Foundation

ELA: The Queen of Bitcoin

Bitcoin transformed finance by deploying blockchain technology, a decentralised system that replaces central authority with cryptographic trust. At its heart lies the Proof of Work (PoW) consensus algorithm, where miners expend computational energy to compete and solve complex mathematical problems, securing the network and validating transactions for BTC rewards. This model reflects the natural co

Bitcoin transformed finance by deploying blockchain technology, a decentralised system that replaces central authority with cryptographic trust. At its heart lies the Proof of Work (PoW) consensus algorithm, where miners expend computational energy to compete and solve complex mathematical problems, securing the network and validating transactions for BTC rewards.

This model reflects the natural competition for survival, akin to trees vying for sunlight, businesses vying for market dominance, individuals competing for a mate or the dynamics between predators and prey —each process governed by the relentless pursuit of energy and dominance.

Bitcoin’s hashrate represents its own competitive edge in the digital realm. This hashrate, a staggering 595.79 EH/s, signifies a computational battle much like those found in nature, but on a scale that dwarfs the combined power of the world’s supercomputers, underscoring the network’s unmatched security and the near-impossibility of overpowering it.

PoW elevates beyond a simple mechanism, integrating nature’s laws into the digital domain to fortify Bitcoin’s network through electricity, a tangible, physical cost. Bitcoin—becoming the unchallenged cornerstone of digital finance, offers a decentralised alternative that empowers individuals with financial sovereignty and freedom from central authority. It provides a secure, transparent, and accessible financial system for everyone, regardless of location or status.

 

Satoshi’s Vision for Merged Mining

 

 

Merged mining, or Auxiliary Proof of Work (AuxPoW), allows two different blockchains to use the same consensus mechanism. Miners can mine blocks on both chains simultaneously, submitting proof of their work to both networks. The key is that the ‘child’ blockchain, while independent in transactions and storage, relies on the ‘parent’ blockchain’s PoW for its security.

The concept of merged mining was introduced in a Bitcoin forum post by Satoshi Nakamoto in 2010, discussing the possibility of a new service called BitDNS to be mined simultaneously with Bitcoin. Satoshi proposed that by allowing miners to work on both chains at once, without extra effort or splitting the mining community, both networks could benefit from increased security and efficiency. The benefits include:

Economic Assurance of Security: Merged mining with Bitcoin means a ‘child’ blockchain’s security is underwritten by the considerable economic cost of Bitcoin mining. This straightforwardly leverages the existing, well-established energy expenditure of Bitcoin for maximum security with no additional complexity.
Resource Optimisation and Environmental Consideration: Utilising Bitcoin’s existing mining infrastructure, merged mining does not require extra energy, making it an efficient and environmentally considerate approach to securing a blockchain.
Scalability through Proven Infrastructure: By tapping into Bitcoin’s vast network of miners, merged mining scales a ‘child’ blockchain’s security with the growth of Bitcoin’s network.
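To make the mechanics above concrete, here is a deliberately simplified sketch of the auxiliary-proof-of-work idea: the child block’s hash is committed into the parent (Bitcoin) block the miner is already working on, so a single proof of work satisfies both chains. Real AuxPoW uses a coinbase commitment plus a Merkle branch; the header structures and difficulty check below are toy stand-ins for illustration only.

import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine_merged(parent_header: bytes, child_header: bytes, zero_bytes: int = 2):
    # Toy merged mining: commit the child block hash inside the parent header,
    # then grind a nonce until the parent hash meets the target. The same
    # proof of work then satisfies both chains.
    child_commitment = sha256d(child_header)
    nonce = 0
    while True:
        candidate = parent_header + child_commitment + nonce.to_bytes(8, "little")
        parent_hash = sha256d(candidate)
        if parent_hash[:zero_bytes] == b"\x00" * zero_bytes:
            return nonce, parent_hash, child_commitment
        nonce += 1

def child_chain_accepts(parent_hash: bytes, child_commitment: bytes, child_header: bytes,
                        zero_bytes: int = 2) -> bool:
    # The child chain checks that (a) its own header is what was committed and
    # (b) the parent's proof of work meets the child's target.
    return (sha256d(child_header) == child_commitment
            and parent_hash[:zero_bytes] == b"\x00" * zero_bytes)

nonce, parent_hash, commitment = mine_merged(b"parent-block-header", b"child-block-header")
assert child_chain_accepts(parent_hash, commitment, b"child-block-header")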

Merged mining showcases efficiency and symbiosis, much like the natural cooperation in mycorrhizal networks, bees’ cross-species pollination, and mutualistic relationships between birds and mammals. It mirrors human ingenuity in leveraging established resources, such as start-ups utilising corporate infrastructures and solar panels or trees harnessing the sun’s energy, emphasising the smart utilisation of existing networks to bolster security and growth without additional expenditure.

Notably, Namecoin, one of the first to adopt this with Bitcoin, aims at decentralising domain-name registration. Dogecoin, known for being merge mined, actually pairs with Litecoin due to the shared Scrypt algorithm, not Bitcoin. Myriadcoin’s unique approach supports multiple algorithms, including SHA-256, making it compatible with Bitcoin. Syscoin and Elastos also leverage Bitcoin’s hash power for enhanced security through merge mining.

 

Elastos and Bitcoin Merged Mining

Elastos, which began with the vision of creating a secure, decentralised internet, incorporated merged mining with Bitcoin in 2018. BTC.com helped mine its first block, and today, its network and currency ELA benefit from over 50% of Bitcoin’s mining security. So, what does this mean?

Elastos Utilises the Strongest Proof of Work Security Model in Existence: By merged mining with Bitcoin, Elastos capitalises on the most extensive PoW network, inheriting Bitcoin’s unparalleled security attributes. This symbiotic relationship means Elastos’ blockchain integrity is as robust as Bitcoin’s, mitigating risks without directly vying for Bitcoin’s mining resources.
Elastos Has Achieved an Energy-Efficient Design Without Compromising Security: Energy efficiency is a major concern in cryptocurrency mining. Elastos adds transaction and block validation on its network by piggybacking on the work done by Bitcoin miners, thus maintaining high security with no additional energy requirements. This model serves as a case study in eco-conscious blockchain design.
Elastos Offers a Unique Combination of a Decentralised Operating System with Bitcoin-Level Security: Unlike conventional blockchains, Elastos is a fully-fledged operating system for decentralised applications, secured by a blockchain layer. By integrating Bitcoin’s hash power through merged mining, it ensures a fortified environment for running dApps, differentiating itself significantly from competitors.
Elastos Is Pioneering the True Decentralised Internet Backed by the Robustness of Bitcoin’s Network: Elastos’ aim to revamp the internet structure into a truly decentralised form is ambitious. By aligning its consensus mechanism with that of Bitcoin, it anchors its network to the tried-and-tested resilience of Bitcoin’s mining power, driving forward a new paradigm for digital communication and interaction.
Elastos’s Ecosystem Is Designed to be Self-Sustaining and Independent, Yet Benefits Directly from Bitcoin’s Continued Growth: The design of Elastos’s ecosystem ensures it remains autonomous. As Bitcoin’s network expands and becomes more secure, Elastos indirectly benefits from these enhancements, bolstering its own proposition without the need for additional investment in security.
Elastos May Be the Most Direct Implementation of Satoshi Nakamoto’s Vision for Merged Mining: Elastos’s use of merged mining is arguably a direct reflection of Satoshi’s initial musings on the subject. Its broad strategic outlook that includes an operating system, a carrier network, and SDKs for developers, all secured by the hash rate of Bitcoin, makes it a comprehensive and multidimensional implementation of the concept.

 

BTC’s Queen

Elastos, by merged mining with Bitcoin, can be likened to a queen in the chess game of digital finance, where Bitcoin holds the position of king. Just as a queen’s versatility and power are essential for protecting the king and dominating the board, Elastos’ integration with Bitcoin’s security framework amplifies the ecosystem’s resilience and innovation and gives its own ecosystem a plethora of utility. This includes:

Transaction Fees: ELA powers Elastos by covering transaction fees, including smart contracts and asset registrations, ensuring network security and efficiency.
Digital Asset Exchange: ELA fuels a decentralised economy in Elastos, enabling direct trade of digital assets and services, cutting out middlemen.
Incentive Mechanism: ELA rewards participants, including miners who secure the network via merge mining with Bitcoin, enhancing security and sustainability.
Governance: Holding ELA grants governance rights, allowing stakeholders to vote on network decisions through the Cyber Republic, promoting community-driven development.
Decentralised Applications (DApps): ELA is essential for using DApps on Elastos, providing access to a broad range of services and expanding the ecosystem’s functionality.

Together, Bitcoin and Elastos form a formidable duo, combining the steadfast security of the king with the dynamic reach and versatility of the queen, setting the stage for a future where digital finance is both secure and boundlessly innovative. What’s more, Elastos is developing BeL2, the Bitcoin Elastos Layer 2 protocol, a scalable BitVM innovation that allows EVM smart contracts to run directly on top of Bitcoin. What if such services enabled anyone with a decentralised wallet to generate their own Bitcoin-backed algorithmic stablecoins, free from censorship? If Bitcoin introduced the concept of “Be Your Own Bank,” could Elastos expand the idea to “Be Your Own Central Bank,” both secured by PoW? This could drastically disrupt finance as we know it.

Interested in staying up to date? Follow Elastos here and join our live telegram.


Hyperledger Foundation

Hyperledger Mentorship Spotlight: Aries-vcx based message mediator



The world of technology has seen significant developments over the past few decades, largely driven by advancements in cryptography. These advancements have led to innovations including secure internet traffic through HTTPS and WireGuard; protected data storage via BitLocker, LUKS, and fscrypt; decentralized consensus records using Bitcoin and Ethereum; and privacy-focused messaging protocols like Signal and MLS (Messaging Layer Security).

However, despite these advances, our online identities remain controlled by third parties, whether we sign in to apps using Google or Facebook OpenID or manage "verified" accounts on platforms such as Twitter or Instagram. An emerging movement seeks to change this status quo by harnessing the transformative power of cryptography. Governments are also starting to recognize the value of self-sovereign identity (SSI)—a system in which individuals retain full control of their own digital identities.


MyData

Open position: Legal and policy specialist/ ecosystems specialist

Job title: Legal and policy specialist / ecosystems specialist
Employment type: Fixed contract
Contract duration: March 2024 through 31 March 2026, with opportunity for renewal
Location: Remote, based in the EU, with a preference for Oslo or Helsinki
Reports to: Executive Director

Role description: The ecosystems specialist is responsible for advancing MyData’s work to facilitate the emergence of […]

Tuesday, 12. March 2024

MOBI

Standardized Web3 Solution for Vehicle Registration, Titling, and Liens


Standardized Web3 Solution for Vehicle Registration, Titling, and Liens

Stay tuned for updates!

About Our Web3 Cross-Industry Interoperability Pilots

Alongside our global community, we’ve demonstrated several potential use cases for Citopia and Integrated Trust Network (ITN) services through various pilot projects. Together, Citopia and the ITN provide the necessary infrastructure for node operators to build out secure, seamless, globally compliant web services and applications. MOBI membership is required to operate a node on Citopia and/or the ITN. Contact us to learn more about becoming a node operator

Overview of the Pilot and the Problem It Solves

Across the United States, there is a diverse array of jurisdictions (numbering in the thousands across states, counties, and municipalities) and titling/registration service providers, each maintaining unique databases and processes for vehicle registration and titling. Many states (AZ, DE, GA, FL, LA, MA, MD, NC, SC, PA, VA, and WI) currently mandate the use of electronic lien and title (ELT) systems. Other states have planned ELT mandates in 2024, or more generally are developing a digital approach to electronic vehicle titling. For example, New York and Idaho have developed or are developing processes for electronic dealer reassignments.

Each of these jurisdictions will maintain their own systems for these varied processes. The challenge lies in achieving interoperability between those systems through standardized communications and data reporting/exchange across jurisdictional, platform, and organizational lines while enabling each jurisdiction to maintain control over its processes. For example, today, each vehicle manufacturer or lender can have hundreds of unique identifiers assigned to them by different jurisdictions, creating confusion, mismanagement, and inefficiency.

Currently, secure digital authentication and communication rely on identifiers issued by centralized platforms to prove credentials. However, in addition to being vulnerable to fraud, identity theft, and data leaks, centralized approaches to identity management fail to address the trust problems created by the rise of decentralized services, IoT, and generative AI. As digitization advances, it will become increasingly challenging, and costly, to verify data authenticity, secure digital perimeters, and ensure compliance with cross-border regulations. This is critical for state agencies like MVAs as well as dealers and lenders, who are responsible for executing the bulk of the registration/titling process.

Stakeholders: Vehicle Manufacturers (OEMs); Financial Institutions (FIs)/Lenders; Servicers; Dealerships; Motor Vehicle Authorities (MVAs)/Third-party Registration/Titling Providers (RTPs); State Authorized Inspectors; Third-Party Data Consolidators; Fleet Operators; Trade Associations; Vehicle Auctions; and Consumers.

Our Innovative Solution

Overcoming these challenges calls for a new solution. The White House’s Federal Zero Trust Strategy (2022) mandates that federal agencies and any organization that works with the federal government adopt a Zero Trust framework by the end of FY 2024. Zero Trust requires every entity to authenticate and validate every other entity for every single digital communication at all times. Since this is not possible at scale through Web2/centralized means, Web3 technologies and principles must be leveraged.

MOBI and its members have developed platform-agnostic, standardized “universal translators,” called Citopia Passports (Web3 Plug-and-Play), that work with any existing legacy system or web service to enable cross-industry interoperable communication through the World Wide Web Consortium (W3C) decentralized identifier and verifiable credential frameworks. Citopia Passports protect organizations’ and customers’ data privacy, which is key for complying with the comprehensive data privacy laws being passed by many states (e.g., CA, CT, OR, TX, UT, VA).

Explore Cross-Industry Interoperability Requirements

Interested in learning more? Dive deeper on our Web3 Infrastructure Page!

Zero Trust Authentication: Cross-industry interoperability requires claims and identities to be verified for each transaction to ensure maximum security. Read the Federal Zero Trust Strategy

Infosec & Selective Disclosure: Participants must be able to selectively disclose information for transactions at the edge. Verification must be done at the moment of transaction to eliminate the need for PII storage.

Scalability and Extensibility: Cross-industry interoperability requires a shared standards-based framework to enable the creation of globally scalable multiparty applications.

Data Privacy Compliance: Cross-industry interoperability requires (1) compliance with existing global data privacy regulations and (2) the flexibility to comply with future directives.

Global Standards: Cross-industry interoperability requires a standardized system for frictionless data exchange and collaboration while allowing stakeholders to retain their legacy systems.

Decentralization: Cross-industry interoperability requires a community-owned and -operated infrastructure to (1) prevent monopolization and (2) enable consensus-based trust.

Web3 Plug-and-Play

Citopia Passports utilize W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs) standards. This creates an interoperability framework that provides:

Explore Citopia Passports

Decentralized, trusted identities

Digital credential issuance/verification

Interoperable communication between each stakeholder’s centralized databases

A bridge between jurisdictions, organizations, and platforms allowing each stakeholder to keep their legacy systems

The result is the reduction of errors, streamlined operations, increased efficiency, and reduced costs, as well as greatly improved permissioned data access. More generally, this cross-industry, platform-agnostic, universal interoperability is part of what has motivated government interest worldwide in implementing and adopting standards-based digital identity and credential systems (e.g., the Department of Homeland Security (DHS) in the US; the European Union Agency for Cybersecurity (ENISA) and the European Self-Sovereign Identity Framework (ESSIF) in the EU).
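To make the DID/VC framing concrete, here is a minimal, hypothetical sketch of what a vehicle title credential could look like when expressed against the W3C Verifiable Credentials data model. Every DID, field name, and value below is invented for illustration and is not taken from the Citopia Passport specification.

// Hypothetical vehicle title credential shaped after the W3C VC data model.
// DIDs, field names, and values are illustrative only.
const vehicleTitleCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "VehicleTitleCredential"],
  issuer: "did:example:state-mva",                 // the titling jurisdiction
  issuanceDate: "2024-03-12T00:00:00Z",
  credentialSubject: {
    id: "did:example:vehicle-owner",               // the title holder
    vin: "1HGCM82633A004352",
    titleNumber: "TX-123456789",
    lienHolder: "did:example:lender",              // cleared when the lien is released
  },
  proof: {
    type: "DataIntegrityProof",
    verificationMethod: "did:example:state-mva#key-1",
    proofValue: "<signature goes here>",
  },
};

// A verifier (a dealer, a lender, or another MVA) checks the issuer's signature against the
// issuer's DID document rather than calling back into the issuing jurisdiction's database.
console.log(vehicleTitleCredential.credentialSubject.vin);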

Proposed Stakeholder Meeting

MOBI is proposing a two-part meeting in the first half of 2024: part one being a meeting between the association stakeholders (e.g. AAMVA, NADA, ATAEs, NIADA, AFSA, MOBI) and their representative members, and part two being a meeting including the titling service providers. The goals of the meeting are:

- to bring together the key stakeholders to assess the pain points, needs/requirements, and path forward to achieve interoperability between the numerous centralized systems for registration/titling
- to jointly address the opportunity to develop standardized communication between each stakeholder to achieve interoperability for registration/titling processes
- to discuss how secure, verifiable digital identifiers and claims (using open-standard Web3 technologies) can address fundamental problems, such as each lender having hundreds of different identifiers assigned to them by different jurisdictions
- to finalize the scope and scale of the Standardized Web3 Solution for Titling/Registration Pilot

Pilot Planning

In Phase 1 of the Pilot, the FSSC WG will demonstrate privacy-preserving cross-industry interoperability for Titling/Registration via standardized universal identifiers and communication/claims without the need to build new infrastructure. This will involve working with MVAs, lenders, dealers, OEMs, and service providers to demonstrate interoperability across different legacy systems and jurisdictions. At the end of Phase 1, stakeholders will have successfully created Citopia Passports and be able to use their Citopia Passport to easily authenticate each other’s identifiers and claims (such as lien release, odometer disclosures, insurance validation, etc.). Stakeholders will be able to examine the code and outputs to verify that all transactions/communications are private and only visible to the intended recipient.

In Phase 2 of the Pilot, each stakeholder will have the opportunity to run nodes, conduct research and development for their own applications, and actively participate in the pilot for a duration of 6-12 months. The FSSC WG will determine the final scope of Phase 2 after the conclusion of Phase 1.

MOBI WEB3 INFRASTRUCTURE

Explore the Future of
Cross-Industry Interoperability

Together, Citopia and the Integrated Trust Network (ITN) form our federated Web3 infrastructure for verifiable identity, location, and business automation. Learn more

JOIN MOBI

Learn How Your Organization Can Get Involved

Join our community to help shape the future of interoperability, accelerate the adoption of cutting-edge tech, and define a new era of digital trust! Submit an inquiry

Dive Deeper

Interested in learning more about MOBI, our community-owned and operated Web3 Infrastructure, and our interoperability pilots? Contact us at connect@dlt.mobi to get in touch with the team!

Get Involved

The post Standardized Web3 Solution for Vehicle Registration, Titling, and Liens first appeared on MOBI | The New Economy of Movement.

Monday, 11. March 2024

OpenID

Notice of Vote for Proposed Implementer’s Draft of OpenID for Verifiable Credential Issuance


The official voting period will be between Monday, March 25, 2024 and Monday, April 1, 2024, once the 45 day review of the specification has been completed. For the convenience of members who have completed their reviews by then, voting will actually begin on Monday, March 18, 2024.

The OpenID Connect Working Group page is https://openid.net/wg/connect/. If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.

The vote will be conducted at https://openid.net/foundation/members/polls/328.

The post Notice of Vote for Proposed Implementer’s Draft of OpenID for Verifiable Credential Issuance first appeared on OpenID Foundation.


Identity At The Center - Podcast

In the latest episode of the Identity at the Center Podcast,


In the latest episode of the Identity at the Center Podcast, we had the pleasure of speaking with Nick Mothershaw, Chief Identity Strategist at the Open Identity Exchange (OIX). We discussed the concept and functionality of digital wallets, the role of governments in issuing these wallets, and the future of smart and roaming wallets. This was a truly fascinating conversation, and I'm sure you'll find it as insightful as we did. If you're interested in the evolving landscape of identity security, this is one episode you don't want to miss!

You can listen to the episode at idacpodcast.com or on your favorite podcast app.

#iam #podcast #idac

Friday, 08. March 2024

FIDO Alliance

TeleMedia Online: Should All Mobile Business Apps Scrap Passwords and Integrate Biometrics?


Now that all the most advanced mobile devices on the market offer biometric authentication, it’s a good opportunity for apps to align with this and integrate it. FIDO Alliance reported that around 80 percent of data leaks are linked to passwords, so it would be useful for a better alternative to become more widespread.


Security Magazine: Cyber Insights 2024: A Dire Year for CISOs?


“CISOs are too often overlooked or low on resources, funding and/or business support to properly implement change,” adds Andrew Shikiar, executive director at FIDO. “Resting the legal liability on one individual is overlooking the vacuum of responsibility and engagement at the top of organizations that is preventing meaningful change and true cyber resilience.”


Biometric Update: FIDO Alliance ensures long-term value of its specifications in post quantum era


The FIDO Alliance is actively involved in integrating PQC into its standards to ensure long-term efficacy and security, forming working groups to understand the implications and develop migration strategies. With the addition of Prove Identity to its Board of Directors, the coalition continues its mission of shaping future standards for identity authentication.


Engadget: 1Password adds passkey support for Android


Passkey adoption is on the rise, showcased by 1Password’s support of passkeys for Android devices to provide a more secure alternative to traditional passwords through the use of public and private keys.


Human Colossus Foundation

Securing Your Digital Future: A Three-Part Series on Enhanced Privacy through Data Protection - Part 1

Part 1: Understanding the Semantic Foundation of Privacy: The Critical Role of BIT and Its Supplementary Document in Data Protection

In the rapidly evolving digital landscape, the significance of data protection has never been more pronounced. Recent developments, such as the presidential order issued by the White House on February 28th, 2024, to prevent access to sensitive personal data by overseas 'bad actors,' underscore the urgency of safeguarding personal information from exploitation. This context sets the stage for a pivotal conversation on protecting sensitive data from a data semantics perspective—the cornerstone of understanding and interpreting data correctly across diverse systems and stakeholders.

Data semantics supports data interpretability, clarity, and consistency in the digital realm. It includes utilizing data models, vocabularies, taxonomies, ontologies, and knowledge representation to accurately recognize and interpret Personally Identifiable Information (PII) and sensitive data, ensuring that digital entities comprehend the sensitivity of this information, irrespective of their domain. The Blinding Identity Taxonomy (BIT) emerges as a beacon of guidance in data protection, supporting the fight against intrusive surveillance, scams, blackmail, and other privacy violations.

Celebrating the BIT and Its Evolution

Developed by the Human Colossus Foundation (HCF) and supported by Kantara Initiative, the BIT provides a robust framework for identifying and flagging sensitive information within data sets. Its purpose is not just to adhere to privacy laws such as GDPR and CCPA but to fortify the semantic understanding of what constitutes 'sensitive data.' The BIT involves a nuanced comprehension of data attributes that, if mishandled, could lead to privacy breaches or misuse.

With notable contributions from Paul Knowles, Chair of the HCF Decentralised Semantics WG, the BIT Supplementary Document significantly enhances comprehension of the taxonomy. As an active contributor to the Dynamic Data Economy (DDE), HCF transferred the intellectual property rights of the newly released BIT Supplementary Document to Kantara Initiative, a global community focused on improving the trustworthy use of identity and personal data, on December 13th, 2023. Although not yet incorporated as an official appendix to GDPR, CCPA, or similar national regulations, its publication as an official Kantara Initiative report on March 5th, 2024, strengthens the BIT's utility by offering detailed insights into the BIT categories.

The release of the BIT Supplementary Document marks a significant advancement in this journey. Offering detailed insights into the 49 BIT categories, it serves as an indispensable manual for practitioners aiming to navigate the complexities of data protection. It not only enumerates what constitutes sensitive information but also elaborates on how to interpret and handle this data, ensuring semantic integrity across systems. The BIT is the world's most comprehensive taxonomy for preventing re-identification attacks, with the Supplementary Document adding further depth and clarity.

Flagging Sensitive Attributes: A Semantic Safeguard

As the BIT report recommends, flagging sensitive attributes in a schema capture base is a practice rooted in semantic precision. It enables data protection officers and schema issuers to identify elements that demand cryptographic encoding, thereby minimizing the risk of re-identifying a data principal. Flagging acts as semantic annotation, marking data with an additional layer of meaning: its sensitivity or risk level. This aids compliance with data protection regulations and enhances the semantic coherence of data handling practices.

By utilizing the BIT and its Supplementary Document, practitioners have a common guideline for determining which attributes to flag. This standard practice ensures that sensitive data is understood and interpreted consistently, avoiding ambiguities that could lead to data breaches. The BIT framework empowers practitioners to embed data protection principles directly into their semantic models, making privacy a foundational aspect of data interpretation.
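As a rough illustration of the flagging practice described above, the sketch below marks BIT-listed attributes in a hypothetical schema capture base and blinds them before a record is shared. The structure and function names are invented for this example; they are not taken from the BIT report or from any OCA/DDE tooling.

// Hypothetical sketch: flag BIT-listed attributes in a schema and blind them on export.
// Names and structure are illustrative only, not an official BIT or OCA implementation.
import { createHash } from "crypto";

interface CaptureBase {
  attributes: Record<string, "Text" | "DateTime" | "Numeric">;
  flaggedAttributes: string[]; // attributes matching BIT categories (e.g. names, birthdates)
}

const passportSchema: CaptureBase = {
  attributes: {
    fullName: "Text",
    dateOfBirth: "DateTime",
    passportNumber: "Text",
    issuingCountry: "Text",
  },
  // Per the BIT, identifying attributes are flagged so issuers know they need protection.
  flaggedAttributes: ["fullName", "dateOfBirth", "passportNumber"],
};

// Replace flagged values with a salted hash before sharing; unflagged values pass through.
function blindFlagged(schema: CaptureBase, record: Record<string, string>, salt: string) {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(record)) {
    out[key] = schema.flaggedAttributes.includes(key)
      ? createHash("sha256").update(salt + value).digest("hex")
      : value;
  }
  return out;
}

console.log(blindFlagged(passportSchema, {
  fullName: "Ada Lovelace",
  dateOfBirth: "1815-12-10",
  passportNumber: "X1234567",
  issuingCountry: "GB",
}, "per-dataset-salt"));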

Conclusion: The Semantic Imperative for Data Protection

In a digitally interconnected world, we cannot overstate the importance of data semantics as we navigate the complexities of data protection. The BIT and its Supplementary Document offer a comprehensive framework for understanding and protecting sensitive data, grounding data protection in semantic precision. As we move forward, we encourage individuals, organizations, and ecosystems to embrace these tools, ensuring that sensitive information is flagged, protected, and interpreted carefully.

BIT Supplementary Document

The BIT and its Supplementary Document enrich our toolkit for privacy preservation. The BIT is accessible in PDF and HTML formats, catering to diverse user preferences. Those seeking deeper insights can download the BIT Supplementary Document in PDF format from Kantara Initiative's Reports & Recommendations page. This invaluable resource resides under the 'Kantara Initiative Reports' section, clearly labeled as "Supplementary Report to Blinding Identity Taxonomy Report," ensuring straightforward access for all interested parties.

Stay tuned for Part 2 of this three-part series, where we will delve into the crucial aspect of data governance. We will explore how to implement BIT guidelines for protecting sensitive personal information from a data administration vantage point. Our discussion will navigate the governance frameworks and practices that ensure these recommendations are not just theoretical ideals but are effectively integrated into the operational fabric of organizations and distributed data ecosystems, safeguarding privacy at every turn.


OpenID

OpenID Foundation Certification Program Recruiting a Java Developer

The OpenID Foundation is pleased to announce that it is looking to add a Java developer to the successful OpenID certification program team. The OpenID Foundation enables deployments of OpenID specifications to be certified to specific conformance profiles to promote interoperability among implementations. The certification process utilizes self-certification and conformance test suites developed

The OpenID Foundation is pleased to announce that it is looking to add a Java developer to the successful OpenID certification program team. The OpenID Foundation enables deployments of OpenID specifications to be certified to specific conformance profiles to promote interoperability among implementations. The certification process utilizes self-certification and conformance test suites developed by the Foundation.

The Foundation is seeking a consultant (contractor) to join the team on a part- to full-time basis based on availability. This remote team member will provide development, maintenance, and support services to the program that include but are not limited to implementing new tests, addressing conformance suite bugs, and updating existing conformance test suites.

SKILLS:

- Strong and documented experience with Java or a similar language
- Some knowledge of OAuth 2 / OpenID Connect / OpenID for Verifiable Credentials / SIOPv2 / FAPI / JWTs (with an interest in becoming more proficient in these standards)
- An interest in security & interoperability
- Experience participating in relevant standards working groups (e.g. IETF OAuth, OpenID Connect, OIDF Digital Credentials Protocols, and/or FAPI) is a bonus
- Experience with one or more of the OpenID Certification conformance suites is a bonus
- Experience and comfort working as a remote team member in a virtual environment


TASKS:

Development tasks include:
- Developing new test modules
- Updating existing conformance tests when changes to the specs are approved
- Extending the conformance tests to work against servers in new ecosystems, including adding additional security / interoperability checks
- Undertaking more extensive development tasks, including developing conformance tests for previously untested specifications
- Reviewing code changes done by other team members
- Pushing new versions to production as/when necessary and writing release notes
- Investigating / fixing reported bugs in the conformance suite
- Providing guidance to ecosystems that adopt OpenID Foundation specifications
- Attending OIDF working group calls as/when necessary
- Attending a 1-hour virtual team call every 2 weeks
- As this is a remote position, attending an annual team meeting that is usually adjacent to an industry event


If this opportunity is of interest, please send your resume and cover letter to director@oidf.org with the subject, “OIDF Certification Program Java Developer Opportunity”. Please include in your cover letter how your skills and experience align to the requirements outlined above, your available hours per month, including when you are available to start, and your hourly rate.

The post OpenID Foundation Certification Program Recruiting a Java Developer first appeared on OpenID Foundation.

Thursday, 07. March 2024

FIDO Alliance

Mercari’s Passkey Authentication Speeds Up Sign-in 3.9 Times


Mercari, Inc. is a Japanese e-commerce company, offering marketplace services as well as online and mobile payment solutions. With Mercari users can sell items on the marketplace, and make purchases in physical stores. In 2023, they implemented passkeys. This article will explain the motivation behind their decision and the results they achieved.

Motivation

Previously, Mercari used passwords and, faced with real-time phishing attacks, added SMS OTPs as an authentication method to protect its users. While this improved security, it did not completely eliminate real-time phishing attacks. Sending a high volume of SMS OTPs was also both expensive and not very user-friendly.

Mercari also had a new service, Mercoin, a platform for buying and selling Bitcoin with a user’s available balance in Mercari. Mercoin had strong security requirements, and passkeys met those needs.

Because passkeys are bound to a website or app’s identity, they’re safe from phishing attacks. The browser and operating system ensure that a passkey can only be used with the website or app that created them. This frees users from being responsible for signing in to the genuine website or app.
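As a rough illustration of that binding, here is a minimal browser-side sketch of registering a passkey with the WebAuthn API. The relying party ID, user details, and flow are hypothetical; Mercari's actual implementation is described in their own blog linked below.

// Minimal, hypothetical passkey registration sketch using the WebAuthn API.
// The rp.id binding is what prevents the credential from being used on a phishing site.
async function registerPasskey() {
  // In practice the challenge and user info come from the relying party's server.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "example.com", name: "Example Marketplace" }, // hypothetical relying party
      user: {
        id: new TextEncoder().encode("user-123"),
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  });

  // The attestation response would then be sent to the server for verification and storage.
  return credential;
}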

Requiring users to use extra authentication methods and perform additional actions is an obstacle when what users actually want is to accomplish something else using the app.

Adding passkey authentication removes that additional step of SMS OTP and improves user experience while also providing better protection for users from real-time phishing attacks and reducing the cost associated with SMS OTPs.

Results

900,000 Mercari accounts have registered passkeys, and the success rate of signing in with them is 82.5%, compared to a 67.7% success rate for signing in with SMS OTP.

Signing in with passkeys has also proved to be 3.9 times faster than signing in with SMS OTP–Mercari users on average take 4.4 seconds to sign in with passkeys, while it takes them 17 seconds to do the same with SMS OTP.

The higher the success rate of authentication and the shorter the authentication time, the better the user experience, and Mercari has seen great success with implementing passkeys.

Learn more about Mercari’s implementation of passkeys

To learn more about how Mercari solved the challenges of making a phishing resistant environment with passkeys, read their blog on Mercari’s passkey adoption.

Download Case Study

We Are Open co-op

The Power of Community Knowledge Management

Celebrating Open Education Week 2024

A couple of days ago we ran our fourth Community Conversations session. This one was timed to coincide with Open Education Week, an initiative of OE Global created as “an annual celebration [and] opportunity for actively sharing and learning about the latest achievements in Open Education worldwide”.

Our focus was on managing knowledge in communities. The version in the video above is a shortened version of the session, which we recorded without the activities. This blog post contains most of the information in the recording.

What is Knowledge?

Community is key to open education, with an often-overlooked aspect of community management and evolution being how knowledge is stewarded within such networks.

Image by gapingvoid

Let’s start with the above image, showing the difference between terms and concepts that are sometimes used interchangeably, but actually mean different things.

When we talk about community knowledge we’re talking about connecting the dots between information being shared between members. This can turn into insight through a process of reflection, and wisdom by connecting together different insights.

In practice, nothing is ever as simple as the process shown in the above diagram. However, it’s a convenient way to tease apart some of the subtleties.

A Simple, Homely Example

I went on holiday with my family recently. We ‘favourited’ some places on Google Maps as part of our planning, to help us navigate while we were there, and to be able to share what we enjoyed with others afterwards.

Screenshot of Google Maps showing ‘favourited’ and ‘bookmarked’ places in Faro, Portugal

What’s represented on the above screenshot is a bunch of data arranged on a map. When you click on each point, there is further information about each place. If I put these together into an itinerary, this could be considered a form of knowledge.

This is a form of community knowledge management on a very small scale: the community represented by my nuclear family, my extended family and friends, and potentially those people who might in future ask for recommendations on what to do in Faro, Portugal.

Other proprietary tools that might be used to store data and information with others include Trello and Pinterest. You are curating these things as individuals for a particular purpose, but there is not necessarily an effort to connect together the dots in any meaningful way.

Community Knowledge Management

So, what’s the difference between what we’ve discussed so far and managing knowledge within communities?

In this case, we’re specifically talking about Communities of Practice, which we discuss in the first three Community Conversations workshops. Briefly put, they can be defined in the following way:

“Communities of Practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.” (Etienne Wenger)

Harold Jarche has a very clear diagram that he uses regularly in his work around Personal Knowledge Management (PKM) to explore the differences between the spaces in which we interact:

Image via jarche.com/pkm

We’re interested in the middle oval in this diagram, with Communities of Practice (CoPs) overlapping with ‘Work Teams’ and ‘Social Networks’. While we might build knowledge within the walls of our organisations, and share things online with strangers, CoPs are intentional spaces for us to build knowledge between organisations with people we get to know better over time.

Rosie Sherry defines Community Knowledge Management in the following way:

Community Knowledge Management is a process for collaboratively collecting information, insights, stories and perspectives with the goal of supporting a community and the ecosystem with their own learning and growth. (Rosie Sherry)

Although she doesn’t mention it explicitly, the implication is that by “collecting information, insights, stories, and perspectives” we not only share knowledge, but also co-create it.

Tools for Community Knowledge Management

The new version of the Participate platform, to which we are migrating the ORE community, is organised around three types of ‘thing’: Badges, Events, and Docs.

This is useful for keeping communities organised. But what if you’ve got a lot of information, almost books’ worth, and you need to organise it? In this case, it’s worth looking at another tool to augment your community’s ‘home’, one which provides some more specialised features.

As you would expect from an organisation entitled We Are Open Co-op, we’re interested in working openly, using openly-licensed resources, open source tools, and cooperating with others. That means we’re going to point towards Open Source software in this section that we know, have used, and trust.

Here are three examples of the types of platforms which can host knowledge created in CoPs:

- Wikis — everyone knows Wikipedia, but any organisation or community can have a wiki! You can use the same software, called MediaWiki, or use many other alternatives (we use wiki.js).
- Forums — these are easily searchable, so they can be used to capture useful information as part of conversations. We’re big fans of Discourse and have used it for several client projects.
- Learning Management Systems (LMS) — these can be used to capture information, especially if your community is based around educational resources. Our go-to for this is Moodle.

For the sake of brevity, and to point to our own example, we’re going to show our use of MediaWiki to form Badge Wiki. This has been around for over six years at this point, and serves as a knowledge base for the Open Badges and wider digital credentials community.

Community Knowledge Contribution

There are behaviours around this knowledge repository that overlap with those inside the main community platform. But there are also others, specific to it. For example:

- Community Calls specifically focused on discussing and planning elements of Badge Wiki.
- Barn raisings which focus on co-creation of pages to help establish the knowledge base.
- Asynchronous discussions to talk about strategy, and catch up between synchronous events such as the previous two.
- Pro-social behaviours are encouraged and recognised through the use of badges.

To dig into the last of these, we know that there are all kinds of reasons why people contribute to Open Source and community projects. We just want to give them a reason to keep doing so.

Image taken from work WAO did with Greenpeace. See more in this post.

We created a range of badges specifically focused on the community knowledge base. There are attendance badges, for example with the barn raising (and attending multiple times) but also for particular actions such as authoring pages, tidying up existing content, and making it look better!

Images CC BY-ND Visual Thinkery for WAO

Once you’ve got a knowledge base, you can run projects on top of it. So when an ORE community member mentioned that it would be useful to have a ‘toolkit’ for helping people understand Open Recognition… Badge Wiki was the obvious place for it to live!

We launched v0.1 of the Open Recognition Toolkit at ePIC 2023 in Vienna. As it’s a wiki, this can be easily iterated over time with multiple authors — who can contribute as little or as much as they want.

There’s so much more we could say, but there’s no substitute for practice! Whether you’re planning to start a new community, in the midst of setting one up, or stewarding an existing one, it’s important to think about good practices around Community Knowledge Management.

Being intentional and inclusive about what kind of knowledge is captured and shared within communities is crucial. It’s powerful to pool resources and to help generate insights; it helps to provide impact. It also helps fulfil the needs of different members of the community and helps increase the diversity and richness of who gets involved — and how.

If you would like a thought partner for this kind of work, why not get in touch and have a chat with the friendly people at WAO? The first 30 min call is free of charge, and we’ll do our best to help, or point you towards someone who can!

CC BY-ND Visual Thinkery for WAO

The Power of Community Knowledge Management was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 06. March 2024

Ceramic Network

Building Points on Ceramic - an Example and Learnings

We built a Web3 points application on Ceramic to explore the design considerations a successful system requires.

We've made the case in a recent blog post that web3 point systems align the incentives of platforms and their users, acting as reputation systems that allow participants to draw inferences between who's creating value and who's likely to receive rewards for their actions. More importantly, these systems help participants understand what user interactions matter to applications using points. And while points often manifest as objects referred to by different names (badges and attestations, for example), there's a commonality across these implementations relevant to their verifiability.

Why Points and Ceramic?

Points data requires properties allowing consumers of points (traditionally the same applications issuing them) to trust their provenance and lineage. This is unsurprisingly why most Web3 points systems today are built on centralized rails - not only is a simple Postgres instance easy to spin up, but the only data corruption vulnerability would result from poor code or security practices.

For readers familiar with Ceramic's composability value proposition, it's likely obvious why we view web3 point systems (and reputation systems more broadly) as ideal Ceramic use cases. Not only does Ceramic offer rich query capabilities, data provenance and verifiability promises, and performance-related guarantees, but both end users and applications benefit from portable activity. We foresee an ecosystem where end users can leverage the identity they've aggregated from one application across many others. In turn, applications can start building on user reputation data from day one.

To put this into practice, we built a scavenger hunt application for EthDenver '24 that allowed participants to collect points based on in-person event attendance.

A Scavenger Hunt for Points

Ceramic was officially involved in 8 or so in-person engagements this year at EthDenver, some of which were cosponsored events (such as Proof of Data and Open Data Day), while others were cross-collaborations between Ceramic and our partners (for example, driving participants to check in at official partner booths at the official EthDenver conference location). The idea was simple - participants would collect points for checking in at these events, and based on different thresholds or interpretations of participant point data (for example, participants with the most event check-ins) would be eligible for prizes.

To make this happen, we ideated on various patterns of data control and schema design that presented the best balance of trade-offs for this use case. In simple terms, we needed to:

- Track event attendance by creating or updating abstractions of that activity in Ceramic
- Provide a crypto-native means for participants to self-identify to leverage Ceramic-native scalar types
- Secure the application against potential spoofing attempts
- Collect enough information necessary to perform creative computation on verifiable point data

We were also presented with several considerations. For example, should we go through the effort to emulate a user-centric data control design whereby we implement a pattern that requires additional server-side verification and signed data to allow the end user to control their Ceramic point data? Or what's the right balance of data we should collect to enable interesting interpretations (or PointMaterializations) to be made as a result of computing over points?

Architecting Document Control

Before we jump in, reading our blog post on Data Control Patterns in Decentralized Storage would help provide useful context. As for the problem at hand, two options stand out as the most obvious ways to build a verifiable points system on open data rails:

1. Reconstruct the approach that would be taken on traditional rails (the application is the author and controller of all points data they generate). This makes the data easy to verify externally based on the Ceramic document controller (which will always be the same), and data consumers wouldn't have to worry about end users attempting to modify stream data in their favor.
2. Allow the end users to control their points data on Ceramic. In this environment, we'd need a flow that would be able to validate that the existing data had been "approved" by us by verifying a signed payload, then update the data and sign it again before having the user save the update to their Ceramic document, thus ensuring the data is tamper-evident.

You might've guessed that the second option is higher-touch. At the same time, a future iteration of this system might want to involve a data marketplace that allows users to sell their points data, requiring users to control their data and its access control conditions. For this reason and many others, we went with #2. We'll discuss how we executed this in the sections below.

What Data Models Did We Use?

When we first started building the scavenger hunt application, the SET accountRelation schema option had not yet been released in ComposeDB (important to note due to the high likelihood we would've used it). Keep that in mind as we overview some of the APIs we built to check if existing model instances had been created (later in this article).

In discussing internally how points data manifests, we decided to mirror a flow that looked like trigger -> point issuance -> point materialization. This means that attending an event triggers issuing point data related to that action. In response, that issuance event might materialize as an interpretation of the weight and context of those points (which could be created by both the application that issued the points and any other entity listening in on a user's point activity).

As a result, our ComposeDB schemas ended up like this:

type PointClaims
  @createModel(accountRelation: LIST, description: "A point claim model")
  @createIndex(fields: [{ path: ["issuer"] }]) {
  holder: DID! @documentAccount
  issuer: DID! @accountReference
  issuer_verification: String! @string(maxLength: 100000)
  data: [Data!]! @list(maxLength: 100000)
}

type Data {
  value: Int!
  timestamp: DateTime!
  context: String @string(maxLength: 1000000)
  refId: StreamID
}

type PointMaterializations
  @createModel(
    accountRelation: LIST
    description: "A point materialization model"
  )
  @createIndex(fields: [{ path: ["recipient"] }]) {
  issuer: DID! @documentAccount
  recipient: DID! @accountReference
  context: String @string(maxLength: 1000000)
  value: Int!
  pointClaimsId: StreamID! @documentReference(model: "PointClaims")
  pointClaim: PointClaims! @relationDocument(property: "pointClaimsId")
}

To provide more context, we built the application to create a new PointClaims instance if one did not already exist for that user, and update the existing PointClaims instance if one already existed (and, in doing so, append an instance of Data to the "data" field). I mentioned above that the SET accountRelation option would've likely come in handy. Since we were hoping to maintain a unique list of PointClaims that only had 1 instance for each user (where the issuer represents the DID of our application), SET would've likely been the preferred way to go to make our lives easier.

You'll also notice that an optional field called "refId" that takes in a StreamID value exists in the Data embedded type. The idea here was that issuing points might be in response to the creation of a Ceramic document, in which case we might want to store a reference pointer to that document. For our scavenger hunt example, this was the case - points were issued in recognition of event attendance represented as individual Ceramic documents:

type EthDenverAttendance
  @createModel(
    accountRelation: LIST
    description: "An attendance claim at an EthDenver event"
  )
  @createIndex(fields: [{ path: ["recipient"] }])
  @createIndex(fields: [{ path: ["event"] }])
  @createIndex(fields: [{ path: ["latitude"] }])
  @createIndex(fields: [{ path: ["longitude"] }])
  @createIndex(fields: [{ path: ["timestamp"] }])
  @createIndex(fields: [{ path: ["issuer"] }]) {
  controller: DID! @documentAccount
  issuer: DID! @accountReference
  recipient: String! @string(minLength: 42, maxLength: 42)
  event: String! @string(maxLength: 100)
  latitude: Float
  longitude: Float
  timestamp: DateTime!
  jwt: String! @string(maxLength: 100000)
}

Finally, take a look at the "issuer_verification" field in PointClaims and "jwt" field in EthDenverAttendance. Both fields were allocated to store the data our application verified + signed, represented as a base64-encoded string of a JSON web signature. For PointClaims, this entailed just the values within the "data" array (involving a verification, updating, and resigning process each time new point data needed to be appended).

Issuing Points - Data Flow

For the remainder of the article, feel free to follow along in the following public code:

https://github.com/ceramicstudio/fluence-demo

You'll notice two environment variables (SECRET_KEY and STRING) scoped only for server-side access, the first of which is meant to contain our secret 64-character seed from which we'll instantiate our application's DID (to be used for filtering PointClaims instances for documents where our application's DID is the issuer, as well as for verifying and signing our tamper-evident fields). To explain STRING, it might be helpful at this point if I dive a bit deeper into what we built to support the user flow.

Private PostgreSQL Instance (for Whitelisted Codes)

You'll notice that a findEvent method is called first in the useEffect lifecycle hook within the main component rendered on our post-login screen, which subsequently calls a /api/find route (which uses our STRING environment variable to connect to our PostgreSQL client). For this application, we needed to quickly build a pattern where we were able to both issue and verify codes corresponding to each in-person event that had been generated beforehand. This ties back to our planned in-person flow:

- Participant scans a QR code or taps an NFC disc that contains the URL of our application + a parameterized whitelisted code that hasn't yet been used
- The application checks the database to ensure the code hasn't yet been used

While in theory this part could've been built on Ceramic with an added layer of encryption, it was easier to stand this up quickly with a private Postgres instance.
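For illustration, here is a minimal sketch of what such a whitelist-code check could look like as an API route. The table and column names are hypothetical; the actual route lives in the linked repository and may differ.

// Hypothetical /api/find handler: check whether an event code is whitelisted and unused.
// Table and column names are invented for this sketch.
import { Pool } from "pg";
import type { NextApiRequest, NextApiResponse } from "next";

const pool = new Pool({ connectionString: process.env.STRING }); // private Postgres instance

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const { code } = req.body as { code: string };

  // Look up the code and whether it has already been redeemed.
  const { rows } = await pool.query(
    "SELECT event, used FROM event_codes WHERE code = $1",
    [code]
  );

  if (!rows.length || rows[0].used) {
    return res.status(200).json({ eligible: false });
  }

  // Mark the code as used so it cannot be redeemed twice.
  await pool.query("UPDATE event_codes SET used = true WHERE code = $1", [code]);
  return res.status(200).json({ eligible: true, event: rows[0].event });
}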

Determining Participant Eligibility

If the call to /api/find determines that the code has not been used, findEvent then calls a createEligibility method, passing in the name of the event as the input variable. Notice that the first thing we do is call a getDID method, which calls a /api/checkdid server route that uses our SECRET_KEY variable to instantiate a DID and send us back the did:key identifier.

This is the second check our application performs to prevent cheating, whereby we query ComposeDB for EthDenverAttendance instances, filtering for documents where the signed-in user is the controller, where the event is the string passed into createEligibility, and where our application is the issuer (as evidenced by the DID).

Finally, if no matching document exists, we determine that the participant is eligible to create a badge.
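A sketch of that eligibility query is shown below, mirroring the ComposeDB filter pattern used later in this post for PointClaims. The exact query in the repository may differ; the variable names here (chainId, address, eventName, applicationDid) are assumed.

// Hypothetical eligibility check: has our application already issued this badge to this user?
const eligibility = await composeClient.executeQuery(`
  query CheckAttendance {
    node(id: "${`did:pkh:eip155:${chainId}:${address.toLowerCase()}`}") {
      ... on CeramicAccount {
        ethDenverAttendanceList(
          filters: {
            where: {
              event: { equalTo: "${eventName}" },
              issuer: { equalTo: "${applicationDid}" }
            }
          },
          first: 1
        ) {
          edges {
            node {
              id
            }
          }
        }
      }
    }
  }
`);

// If no matching document exists, the participant is eligible to create a badge.
const isEligible =
  (eligibility.data as any)?.node?.ethDenverAttendanceList?.edges?.length === 0;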

Generating Points Data

While there's plenty to discuss related to generating and validating badge data, given that the pattern is quite similar when issuing points, I'll focus on that flow. The important thing to know here is that both our createBadge and createFinal methods (found in the same component mentioned above) call an issuePoint method if a badge was successfully created by the user, passing in the corresponding value, context, and name of the event corresponding to that issuance.

What happens next is a result of our decision to allow the end user to control their points-related data, such that we:

1. Call an API route to access our application's DID.

2. Call yet another /api/issue route, where we query PointClaims to see if one already exists or not for the end user where our application is also the issuer:

const authenticateDID = async (seed: string) => {
  const key = fromString(seed, "base16");
  const provider = new Ed25519Provider(key);
  const staticDid = new DID({ resolver: KeyResolver.getResolver(), provider });
  await staticDid.authenticate();
  ceramic.did = staticDid;
  return staticDid;
};

// we'll use this both for our query's filter and for signing/verifying data
const did = await authenticateDID(SECRET_KEY);

const exists = await composeClient.executeQuery<{
  node: {
    pointClaimsList: {
      edges: {
        node: {
          id: string;
          data: {
            value: number;
            refId: string;
            timestamp: string;
            context: string;
          }[];
          issuer: {
            id: string;
          };
          holder: {
            id: string;
          };
          issuer_verification: string;
        };
      }[];
    };
  } | null;
}>(`
  query CheckPointClaims {
    node(id: "${`did:pkh:eip155:${chainId}:${address.toLowerCase()}`}") {
      ... on CeramicAccount {
        pointClaimsList(filters: { where: { issuer: { equalTo: "${did.id}" } } }, first: 1) {
          edges {
            node {
              id
              data {
                value
                refId
                timestamp
                context
              }
              issuer {
                id
              }
              holder {
                id
              }
              issuer_verification
            }
          }
        }
      }
    }
  }
`);

3. Use the data passed into the API's request body to sign and encode the values with our application's DID (if no PointClaims instance exists).

4. Decode and verify the existing values of "issuer_verification" against our application's DID before appending the new data, resigning, and re-encoding it with our application's DID (if a PointClaims instance does exist):

if (!exists?.data?.node?.pointClaimsList?.edges.length) {
  const dataToAppend = [{
    value: parseInt(value),
    timestamp: new Date().toISOString(),
    context: context,
    refId: refId ?? undefined,
  }];
  if (!refId) {
    delete dataToAppend[0]?.refId;
  }
  const jws = await did.createJWS(dataToAppend);
  const jwsJsonStr = JSON.stringify(jws);
  const jwsJsonB64 = Buffer.from(jwsJsonStr).toString("base64");
  const completePoint = {
    dataToAppend,
    issuer_verification: jwsJsonB64,
    streamId: "",
  };
  return res.json({ completePoint });
} else {
  const dataToVerify = exists?.data?.node?.pointClaimsList?.edges[0]?.node?.issuer_verification;
  const json = Buffer.from(dataToVerify!, "base64").toString();
  const parsed = JSON.parse(json) as DagJWS;
  const newDid = new DID({ resolver: KeyResolver.getResolver() });
  const result = parsed.payload ? await newDid.verifyJWS(parsed) : undefined;
  const didFromJwt = result?.payload
    ? result?.didResolutionResult.didDocument?.id
    : undefined;
  if (didFromJwt === did.id) {
    const existingData = result?.payload;
    const dataToAppend = [{
      value: parseInt(value),
      timestamp: new Date().toISOString(),
      context: context,
      refId: refId ?? undefined,
    }];
    if (!refId) {
      delete dataToAppend[0]?.refId;
    }
    existingData?.forEach((data: { value: number; timestamp: string; context: string; refId: string }) => {
      dataToAppend.push({
        value: data.value,
        timestamp: data.timestamp,
        context: data.context,
        refId: data.refId,
      });
    });
    const jws = await did.createJWS(dataToAppend);
    const jwsJsonStr = JSON.stringify(jws);
    const jwsJsonB64 = Buffer.from(jwsJsonStr).toString("base64");
    const completePoint = {
      dataToAppend,
      issuer_verification: jwsJsonB64,
      streamId: exists?.data?.node?.pointClaimsList?.edges[0]?.node?.id,
    };
    return res.json({ completePoint });
  } else {
    return res.json({
      err: "Invalid issuer",
    });
  }
}

5. Send the result back client-side.

6. Use our client-side ComposeDB context (on which our end user is already authenticated) to either create or update a PointClaims instance, using the results of our API call as inputs to our mutation:

// if the instance doesn't exist yet
if (finalPoint.completePoint.dataToAppend.length === 1) {
  data = await compose.executeQuery(`
    mutation {
      createPointClaims(input: {
        content: {
          issuer: "${did}"
          data: ${JSON.stringify(finalPoint.completePoint.dataToAppend).replace(/"([^"]+)":/g, '$1:')}
          issuer_verification: "${finalPoint.completePoint.issuer_verification}"
        }
      }) {
        document {
          id
          holder {
            id
          }
          issuer {
            id
          }
          issuer_verification
          data {
            value
            refId
            timestamp
            context
          }
        }
      }
    }
  `);
}

Does this sound a bit tedious? This is the same pattern we're using for issuing and verifying badges as well. And yes, it is verbose compared to what our code would've looked like had we decided not to go through the trouble of allowing our participants to control their Ceramic data.

Creating Manifestations

As mentioned above, PointMaterializations represent how points manifest in a platform for reward structures (like a new badge, an aggregation for a leaderboard, or gating an airdrop). Most importantly, the PointMaterializations collection is a new dataset built from our composable piece PointClaims.

To create PointMaterializations, we use an event-driven architecture, leveraging our MVP EventStream feature. When PointClaims instances are written to Ceramic, we will receive a notification in another application, in this case, a Fluence compute function.

Our compute function works like this:

- Determine that the notification is for the model (PointClaims) and the issuer is the DID of our application.
- Extract the PointClaims from the notification content.
- Verify that the issuer_verification is valid for the data field in PointClaims.
- If the subject of the PointClaims (the document owner) has an existing PointMaterializations, retrieve it; otherwise create a new one.
- For the context of the PointMaterializations, calculate a new value (a simplified sketch of this tally logic follows below):
  - unique-events: tally all the unique context entries in the data field
  - all-events: tally all the entries in the data field
  - first-all-events: similar to all-events, we check all unique context entries in the data field. If they have attended all the events, we then record the latest first event check-in as the value, so that we can rank users by that time
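The production implementation is Rust (linked below); for readability, here is a simplified TypeScript sketch of the three tallies, with the data shapes assumed from the PointClaims model above and the event list invented for illustration.

// Simplified sketch of the three PointMaterializations tallies described above.
// Data shapes follow the PointClaims "data" field; the production code is Rust.
interface PointData {
  value: number;
  timestamp: string; // ISO 8601
  context: string;   // e.g. the event name
}

const ALL_EVENT_CONTEXTS = ["proof-of-data", "open-data-day", "partner-booth"]; // hypothetical

// unique-events: number of distinct contexts the holder has points for
const uniqueEvents = (data: PointData[]) =>
  new Set(data.map((d) => d.context)).size;

// all-events: total number of point entries
const allEvents = (data: PointData[]) => data.length;

// first-all-events: if the holder checked in at every event, return the latest
// "first check-in" timestamp, so holders can be ranked by when they completed the set
const firstAllEvents = (data: PointData[]): string | undefined => {
  const firstByContext = new Map<string, string>();
  for (const d of data) {
    const existing = firstByContext.get(d.context);
    if (!existing || d.timestamp < existing) firstByContext.set(d.context, d.timestamp);
  }
  const attendedAll = ALL_EVENT_CONTEXTS.every((c) => firstByContext.has(c));
  if (!attendedAll) return undefined;
  return [...firstByContext.values()].sort().at(-1); // latest of the first check-ins
};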

If you want to view the Rust code that implements the sequence above, please check out the compute repository.

At the time of writing, the EventStream MVP does not include checkpointing or reusability, so we have set up a checkpointing server to save our state and then use a Fluence cron job, or spell, to periodically run our compute function. In the future, we hope to trigger Fluence compute functions from new events on the EventStream.

What We Learned

This exercise left our team with a multitude of valuable learnings, some of which were more surprising than others:

Wallet Safety and Aversion to Wallet Authentication

We optimized much of the flow and the UI for mobile devices, given that the expected flow required scanning a code/tapping a disc as the entry point to interact with the application. However, throughout EthDenver and the various events where we tried to facilitate issuing points, we overwhelmingly noticed a combination of:

- Participants intentionally do not have a MetaMask/wallet app installed on their phones (for safety reasons)
- If a participant has such a wallet app on their phone, they are VERY averse to connecting it to our scavenger hunt application (particularly if they haven't heard of Ceramic previously)

This presents several problems. First, given that our flow required a scanning/tapping action from the user, it almost entirely rules out using anything other than a phone or tablet. In a busy conference setting, it's unreasonable to expect the user to pull out their laptop, which is why those devices were not prioritized in our design.

Second, the end user must connect their wallet to sign an authentication message from Ceramic to write data to the network (thus aligning with our user-centric data design). There's no other way around this.

Finally, our scavenger hunt application stood ironically in contrast with the dozens of POAP NFC stands scattered throughout the conference (which did not require end users to connect their wallets, and instead allowed them to input their ENS or ETH addresses to receive POAPs). We could've quite easily architected our application to do the same, though we'd sacrifice our user-centric data design.

SET Account Relation will be Useful in Future Iterations

As explained above, the PointClaims model presents an ideal opportunity to use the SET accountRelation configuration in ComposeDB (given how we update an existing model instance if one exists).
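To make that concrete, a SET-based model definition might look roughly like the sketch below, written as a GraphQL schema string the way our other ComposeDB definitions are. The accountRelation: SET and accountRelationFields arguments reflect our reading of the ComposeDB documentation, the choice of "issuer" as the set field is our assumption for this use case, and other fields are omitted; treat it as a sketch rather than a drop-in schema.

// Hypothetical sketch: PointClaims with a SET account relation, so a holder can have at
// most one PointClaims document per issuer. Directive arguments are assumptions; verify
// them against the current ComposeDB release before using.
const pointClaimsSetSchema = `
  type PointClaims
    @createModel(
      description: "Point claims issued to an account"
      accountRelation: SET
      accountRelationFields: ["issuer"]
    ) {
    issuer: DID! @accountReference
    issuer_verification: String! @string(maxLength: 100000)
    # remaining fields (data, etc.) omitted in this sketch
  }
`;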

Data Verifiability in User-Centric Data Design Entails More Work

Not a huge shocker here, and this point is certainly relevant for other teams building with Verifiable Credentials or EAS Off-Chain Attestations on Ceramic. While there are plenty of considerations to go around, we figured that our simple use of an encoded JWT was sufficient for our need to validate both the originating DID and the payload. It was hard to imagine how we would benefit from the additional baggage involved in saving point-related VCs to ComposeDB.
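For reference, the verification half of that pattern condenses to a few lines. This sketch restates the API-route logic shown earlier (base64-decode the issuer_verification string, verify the JWS, compare the resolved DID); the helper name is ours, not part of any library.

import { DID } from "dids";
import type { DagJWS } from "dids";
import * as KeyResolver from "key-did-resolver";

// Decode an issuer_verification string and confirm it was signed by the expected DID.
// Condensed from the API route above; returns the signed payload only if the issuer matches.
async function verifyIssuerVerification(
  issuerVerification: string,
  expectedIssuerDid: string
): Promise<unknown | null> {
  const jws = JSON.parse(
    Buffer.from(issuerVerification, "base64").toString()
  ) as DagJWS;
  const verifier = new DID({ resolver: KeyResolver.getResolver() });
  const result = await verifier.verifyJWS(jws);
  const signer = result.didResolutionResult.didDocument?.id;
  // only trust the payload if it was signed by our application's DID
  return signer === expectedIssuerDid ? result.payload ?? null : null;
}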

Interested in Building Points on Ceramic?

If your team is looking to jam on some points, or you have ideas for how we can improve this implementation, feel free to contact me directly at mzk@3box.io, or start a conversation on the Ceramic Forum. We look forward to hearing from you!


Identity At The Center - Podcast

We have another Sponsor Spotlight episode of the Identity at

We have another Sponsor Spotlight episode of the Identity at the Center podcast for you this week. We were joined by Rich Dandliker, Chief Strategist at Veza.

We had an insightful discussion about Veza's unique approach to identity security, their 'anti-convergence' strategy, the significance of a reputable customer base, and the importance of a data-first approach to identity management.

Don't miss out on this episode for a comprehensive understanding of Veza's innovative solutions in the IAM market. You can listen to the episode at idacpodcast.com or in your favorite podcast app.

#iam #podcast #idac


Next Level Supply Chain Podcast with GS1

Risk, Resilience, and AI in the Supply Chain with Yossi Sheffi

The COVID-19 pandemic threatened to derail supply chain management completely. Or did it?

Yossi Sheffi, distinguished MIT professor and an expert with 49 years in supply chain management, breaks down supply chain resilience into five levels and argues that supply chain managers were unsung heroes during the pandemic. Yossi also touches on balancing resilience with sustainability, pointing out that while essential, both can introduce short-term costs and competitive imbalances. He underscores the delicate balance companies must strike between cost management and maintaining multiple suppliers for risk mitigation.

He expounds on the role of AI in supply chains, emphasizing the importance of leveraging artificial intelligence for identifying alternative suppliers and predictive analysis. The conversation also delves into the roles of machine learning, large language models, and robotics in evolving supply chains. Despite skepticism about fully autonomous applications like pilotless planes, Yossi highlights ongoing experiments with AI as potential co-pilots. The episode concludes with reflections on the rapid technological evolution impacting the professional landscape and the fabric of daily life.

 

Key takeaways: 

Resilience in supply chains is crucial for navigating disruptions and maintaining operational continuity.

Artificial intelligence (AI) technology is vital for supply chain management despite potential challenges.

Supply chain resilience and sustainability are critical concerns, as are the investments in these areas.

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with guest:

Yossi Sheffi on LinkedIn

Check out Yossi’s book - The Magic Conveyor Belt: Supply Chains, A.I., and the Future of Work

 


FIDO Alliance

Tech Game World: Passkeys are arriving on PlayStation: how the smart alternative to the password works

The advantages are many. Let’s start by saying that passkeys are more secure than traditional passwords. They are fraud resistant and follow the Fast Identity Online (FIDO) standards established by the FIDO Alliance, a global organization of which the Sony Group (owner of PlayStation) is a member. The FIDO Alliance is responsible for defining and promoting the most advanced authentication standards for a wide range of devices and platforms. The goal is to reduce dependence on passwords, which are now widely considered an obsolete method. These standards are supported by leading companies and institutions in the technology sector, with which PlayStation itself has collaborated to offer an optimal sign-in experience.


PCMag: No More Passwords: Sony Adopts Passkeys for PlayStation 4, PS5

Sony has introduced passkey support for PlayStation, eliminating the need for traditional passwords. Users can now opt for a more secure and convenient sign-in method by setting up a passkey stored on their phone or laptop. Passkeys use unique cryptographic keys that remain on the device, are phishing resistant, and can be accessed through other devices in case of loss.
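For context on the mechanics, creating a passkey on the web goes through the standard WebAuthn API. The following is a generic, illustrative sketch rather than Sony's implementation; the relying-party, user, and challenge values are placeholders, and in a real flow the challenge comes from the server.

// Generic WebAuthn passkey registration sketch (illustrative only; rpId, user, and
// challenge values are placeholders, and the challenge must be server-issued in practice).
async function registerPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder challenge
      rp: { id: "example.com", name: "Example Service" },
      user: {
        id: new TextEncoder().encode("user-123"),
        name: "player@example.com",
        displayName: "Player One",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  });
}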

Tuesday, 05. March 2024

Hyperledger Foundation

Hyperledger Collaborative Learning Spotlight: BiniBFT - An Optimized BFT on Fabric

WHAT WE WORKED ON:

Monday, 04. March 2024

Project VRM

On Customer Constituency

A customer looks at a market where choice rules and nobody owns anybody. Source: Microsoft Copilot | Designer

I’m in a discussion of business constituencies. On the list (sourced from the writings of Doug Shapiro) are investors, employees, suppliers, customers, and regulators.

The first three are aware of their membership, but the last two? Not so sure.

Since ProjectVRM works for customers, let’s spin the question around. Do customers have a business constituency? If so, businesses are members by the customer’s grace. She can favor, ignore, or more deeply engage with any of those businesses at her pleasure. She does not “belong” to any of them, even though any or all of them may refer to her, or their many other customers, with possessive pronouns.

Take membership (e.g. Costco, Sam’s Club) and loyalty (CVS, Kroger) programs off the table. Membership systems are private markets, and loyalty programs are misnomered. (For more about that, read the “Dysloyalty” chapter of The Intention Economy.)

Let’s look instead at businesses that customers engage as a matter of course: contractors, medical doctors, auto mechanics, retail stores, restaurants, clubs, farmers’ markets, whatever. Some may be on speed dial, but most are not. What matters in all cases is that these businesses are responsible to their customers. “The real and effectual discipline which is exercised over a workman is that of his customers,” Adam Smith writes. “It is the fear of losing their employment which restrains his frauds and corrects his negligence.” That’s what it means to be a customer’s constituent.

An early promise of the Internet was supporting that “effectual discipline.” For the most part, that hasn’t happened. The “one clue” in The Cluetrain Manifesto said “we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.” Thanks to ubiquitous surveillance and capture by corporate giants and unavoidable platforms, corporate grasp far outreaches customer agency.

That’s one reason ProjectVRM has been working against corporate grasp since 2006, and just as long for customer reach. Our case from the start has been that customer independence and agency are good for business. We just need to prove it.


Oasis Open Projects

OASIS Board Member Spotlight Series: Q&A with Jautau “Jay” White, Ph.D.

The OASIS Board of Directors are integral to the organization's success. Read our Q&A to gain a better sense of who they are and why they serve the OASIS community.

Meet Jautau “Jay” White, Ph.D., an accomplished leader with a strong focus on people and teamwork. With two decades of experience, he specializes in building top-notch teams and programs that enhance information security and cybersecurity while reducing risks and ensuring compliance. His expertise spans AI/ML vulnerabilities, supply chain security, data privacy, cybersecurity, and more.

What can you tell us about your current role?
At Microsoft, my role involves supply chain security and open source strategy work. My main function is to be the subject matter expert on cybersecurity and information security matters, and take that knowledge and use it to communicate internally to extrapolate ideas, initiatives, and strategies that can be worked on in a collaborative environment such as open source. 

A large part of my job is going out into the open source ecosystem to see what communities are already in place and to help build communities around work that’s for the betterment of mankind. I seek out opportunities that align with Microsoft’s ongoing projects, identifying areas where Microsoft wants to invest its efforts and finding where those efforts are already underway. We initiate projects within Microsoft and leverage open source collaboration to crowdsource innovative solutions from open source communities. I bring those insights back to Microsoft, advocating for the adoption of these solutions, saying “This is already being done, why don’t we use this?” or “Why don’t we get involved with that?” That’s a large part of my job. I love what I do mainly because it takes everything I’ve learned throughout my entire career to do it.

What inspired you to join the OASIS Board?
I love standards, specs, and policies. Having had a hand in writing standards and then using them throughout my entire career, joining the OASIS Board was an excellent opportunity. One of the things I think that I liked most was the fact that I had to run for the board seat. I campaigned and talked to community members and staff; I really put myself out there and I enjoyed that immensely.

I love what OASIS does in terms of the international community. I love its recognition. There are so many specs and technologies that are being used today that people don’t even know originated in OASIS and I just love that I get a chance to be part of it.

Prior to serving on the OASIS Board, were you involved in open source or open standards? 
For the past few years, I’ve been involved with the Linux Foundation, especially their Open Source Security Foundation (OpenSSF) project. I currently sit on OpenSSF’s Technical Advisory Council (TAC) and I lead a few working groups and special interest groups there as well. Getting involved with OASIS was the next evolution. OASIS does such an amazing job bringing standards and specs to market. I’ve always felt that I want to be involved in this part, because the regulatory part is where I thrive.

What skills and expertise do you bring to the OASIS Board and how do you hope to make an impact?
I bring extensive cyber and security knowledge. Unlike many individuals who specialize in one area for the entirety of their careers, I’ve navigated through many roles inside of cyber and information systems. I’ve been a network engineer, a systems admin, a desktop support engineer, and a penetration tester. Also, I’ve done physical security assessments, admin security assessments, and I’ve installed firewalls. I have a software engineering degree, so I’ve written programs. There are so many different places that I’ve touched throughout my entire career across government, healthcare, finance, and professional services sectors. My experiences have enabled me to approach situations from different vantage points and engage meaningfully in conversations. I’m excited to learn about emerging standards and specs from diverse industries.

Why are you passionate about the OASIS mission to advance technology through global collaboration?
Global collaboration is key. I spent my last few years working in open source, and it’s so important to work collaboratively. I coined the phrase, “strategically competing through partnership and collaboration.” A lot of these major companies are competitors in nature, but there’s so much out there right now that is affecting every single one of our businesses at the same time, that we have to come together to build these standards, technologies, controls, and safeguards so that our joint customer base remains safe. Trust is huge and our customers have to trust each and every one of us equally.

What sets OASIS apart from other organizations that you’ve worked with in the past? 
The way OASIS is constructed around Technical Committees and Open Projects is still relatively new to me. I think where OASIS shines is how standards get created and brought to market. That’s the niche.

What would you say to companies that want to bring their projects to OASIS?
It would totally be dependent on what that company wanted. If they want to create a spec or a standard around a tool that’s being created, I would definitely say go to OASIS.

Do you have an impact story about your work in open source or open standards?
I take great pride in establishing a Diversity Equity and Inclusion (DEI) working group in the OpenSSF where there wasn’t one before. Additionally, I’m proud of the AI work that I’ve been able to bring to Microsoft.

At OASIS, I’m excited to be one of the founding members of the OpenEoX Technical Committee alongside Omar Santos. I’m extremely excited about OpenEoX’s potential; I think it’s going to be huge in the industry because there isn’t a standard for end-of-life and end-of-support. There’s nothing out there that allows customers to understand when new releases are coming in, when they’re going out, and how things are deprecated. Having been a part of OpenEoX since its inception and participating in the initial meetings thus far has been incredibly fulfilling.

Can you tell me about any exciting changes or trends in open source and standards?
The AI space is extremely large and there’s so much room to play in it. I don’t want us to get consumed by one area over the other. There are so many different specs and standards that can be created and I want us to be open to all the possibilities and open to the entire knowledge space.

Where do you see standards going in the future?
I see standards becoming more prevalent with respect to these different government regulations coming in. We have more and more regulatory requirements coming out that are beginning to drive standards, for example the EO from the White House, the EU’s Cyber Resilience Act (CRA), and a policy that’s coming out in Germany. I can see that gap closing where you’ll have a standard that could even drive a regulatory requirement at some point which will be something weird to see.

What’s a fun fact about you?
I ride motorcycles and I like to work on cars and bikes. More than anything, I enjoy getting under the hood of a car or lifting the bike up and taking it apart and putting it back together.

The post OASIS Board Member Spotlight Series: Q&A with Jautau “Jay” White, Ph.D. appeared first on OASIS Open.


Identity At The Center - Podcast

It’s another brand-new episode of the Identity at the Center

It’s another brand-new episode of the Identity at the Center Podcast! This week, we had the pleasure of speaking with Laura Gomez-Martin from RSM. We dove into the role of government in protecting privacy, the complexity of privacy policies, and the balance between public and company expectations. Laura shared her unique insights on these topics and much more. You can listen to the episode at idacpodcast.com or in your favorite podcast app.

#iam #podcast #idac

Friday, 01. March 2024

DIF Blog

Guest blog: David Birch

David Birch is a keynote speaker, a published author on digital money and digital identity, fintech columnist, social media commentator and an international advisor on digital financial services. A recognised thought-leader in digital identity and digital money, he was named one of the global top 15 favorite sources of business information by Wired magazine. 

What was the career path that led to you becoming a thought leader on digital money and identity? 

It’s quite straightforward. I started out in the world of secure electronic communications. I was working primarily in defence and safety-critical communications when suddenly the financial sector needed to know about this stuff too, which took me into the world of finance and payments.

I’d edited a book about digital identity and then I was encouraged by the economist Dame Diane Coyle to write Identity Is The New Money. She was the person who pushed  me to have a go at writing a book myself. The timing was good, as others were talking about similar ideas. 

The book helped people to rethink some of the problems, and I think it’s stood the test of time. 

It’s been ten years since you published Identity Is The New Money. Is this a reality today? 

On one level it is. Some time ago I started to realize that the big problems in payments were identity problems, not payment problems. It doesn't matter if it's push payment fraud or whatever, the problems all come from the lack of identity infrastructure. Why is Elon Musk spending so much on money transmitter licenses, KYC (Know Your Customer) and AML (Anti Money Laundering)? Is it because he wants to earn a tiny slice when you pay a friend for concert tickets, or is it because he wants to know who you are and what you’re into? The data about a payment is much more valuable than the transaction fee. 

But in the sense in which I meant ‘identity is the new money’, it still isn't, and that’s surprising.

What needs to change? 

The lack of privacy is one area. Digital payments are too intrusive, though a lot of people don't care. I get a lot of messages about how terrible it would be for CBDCs (Central Bank Digital Currencies) to include not-yet-existent features such as the ability to block your spending on certain things, yet when it’s Visa or Twitter being able to see everything you buy, no-one seems bothered.

Authentication is another area. It bugs me that 10 years later I'm still doing password resets. Recently I needed to book a hotel room, so I tried logging into a brand where I've got points. I got the password wrong and didn’t want to wait for the reset email. Instead I logged into a different brand and booked a room. My product choice was based on which password I remembered!  

Do you see a role for decentralized identity in fixing these issues? 

I like the underlying standards, Decentralized Identifiers and Verifiable Credentials. But the implementation isn’t there yet. From the history of Bitcoin we can see that people lack the competence to manage their keys. When I drop my phone in the canal, how do I get all my stuff back? In a centralized world it’s easy. I buy a new phone and Apple puts all the pictures back. I’ve got my 2FA (2-Factor Authentication) device, so I can easily log into my bank again. 

Otherwise, I'd have to put my secret key on a memory stick and bury it in a biscuit tin in the back garden. For 99 per cent of the population that will never work. 

How can we overcome these challenges? 

I believe the answer is custodial SSI (Self Sovereign Identity), whereby I generate the key on my app and my bank looks after it. That looks like a viable option to me, because banks know how to securely manage keys in distributed hardware, so I trust them not to lose the key. If they do, there’s recourse, as they’re regulated. 

Do I want to control my digital identity? Absolutely. Do I want to hold the keys? No, I want my bank to do it for me. 

What makes you believe people will trust their bank with the keys to their digital identity? 

There’s trust in the legal sense, and then there’s trust in the everyday sense: I trust that my personal data won’t be looted, that I won’t lose access if I lose my phone… I trust the system to regulate my bank and ensure they don’t do stupid things. In the mass market, that’s the kind of trust that matters — the belief that if something goes wrong, it will get fixed. 

What does a good digital identity experience look like, in your view? 

When I log in to the airline, it should ask “Which ID would you like to use?” If I want to use my Avios app, I should be able to. It might call my EU wallet in the background, but I don't see that, everything is embedded. Personally I'd like to never think about managing my identity again.

In June 2023 you stated that the lack of mass-market digital identity is a drag on the economy. Have you seen much progress since then? 

Lots of companies are experimenting. But is anything mainstream happening? We’re not there yet. For example, I can’t download my Barclays ID and use it to log into my HSBC account.  

We’re starting to see people storing mDLs (mobile driving licenses) in their Apple wallet, and the EU Digital Identity Wallet is on the horizon. Whether it gets traction or not, it’s driving forward research and development. Does that mean the EU wallet will be all-conquering? I don't know. 

You’ve talked about how machine customers or ‘custobots’ will revolutionize ecommerce. Can you expand on this a bit please? 

I think there’s a good chance this will happen, starting with financial services. A bot can probably do a better job of managing my finances than I can. On April 6 (the start of the UK tax year) I’ll be looking at what are the best ISAs (Individual Savings Accounts). I will spend hours faffing about, researching, applying, waving my phone in front of my face to prove it’s me, figuring out which account to transfer money from… It’s the kind of thing that could be done in nanoseconds by AI. 

I might choose the Martin Lewis bot or the Waitrose bot to do this for me. The idea that they could be regulated by the FCA (Financial Conduct Authority) and operate under relevant duty of care legislation, with the co-ordinated goal of delivering my financial health, is appealing. 

I’ve also proposed that companies will soon need to provide APIs to support the needs of custobots rather than people.

Where is digital identity headed, in your view? 

There’s energy arriving into the space from two unexpected quarters. One is CBDCs. There’s a need for identity coming from that side, and pressure to get it fixed. The other area is the metaverse. People looked at the lack of obvious progress following Meta’s early pronouncements and thought, it’s not going anywhere. That’s the wrong lesson to take away. For example Apple Vision Pro (Apple’s extended reality headset) is out and there will be no shortage of people wanting to buy it. 

Digital identity is fundamental to make the metaverse a safe space. Assuming this is done right and the metaverse has an identity layer built in from the start, it could become a safer, less expensive, and therefore more desirable, place to transact than the “real world”. 

Money in the Metaverse: Digital Assets, Online Identities, Spatial Computing and Why Virtual Worlds Mean Real Business will be published in late April 2024. To pre-order, click here.



Thursday, 29. February 2024

EdgeSecure

Empowering Campus Networks

Beginning his career as a student assistant in technology services, Michel Davidoff was responsible for pulling thin and thick ethernet at Sonoma State University. Upon leaving the University ten years later, Davidoff was in the role of Director of Networking and Telecommunication. “I was responsible for the campus network that we had grown from around three hundred devices to well over 10,000,” says Davidoff. “I left Sonoma State in 2002 and set my sights on the California State University (CSU) Chancellors’ Office. They had started a technology strategy program that included at its core digital equity. The Trustees of CSU were looking to implement the Information Technology Strategy (ITS) that was represented as a pyramid. IT infrastructure was at the base of the pyramid; the center, called middle world, included initiatives and projects such as security and identity management; and the top was student success. CSU’s visionaries, including leadership and trustees, understood very early on the importance of technology to enable education. I was invited to participate in a system-wide initiative to determine the standards for the bottom of the pyramid. Following this meeting, I was eager to help further advance this initiative and joined CSU in a new and exciting role.”

Finding Cost-Effective Solutions
Tasked with helping create a consortium of 23 directors of networking at CSU, Davidoff began building working groups of experts. “Along with being a process facilitator for the consortium, I also created guidelines for writing RFPs, particularly outlining the functional requirements,” explains Davidoff. “A large part of my job at CSU was to provide accurate financial projections, both internal and external, and maintain connectivity for all 23 campuses. In 2006 as wireless technology became more prominent, I was tasked with integrating wireless into each campus. Without more money in the budget to do so, I had to get creative.”

“We began by creating a strategy for network rightsizing,” continues Davidoff. “Since I had the data of every single closet at CSU, I knew that more than 50 percent of the switches were not being used, but because of the fear of not having enough as they grow, the network had been built significantly bigger than necessary. I developed a process whereby if a port had not been used in 90 days, it would not be refreshed. That freed about 50 percent of the budget delegated for switching and routing. We were able to deploy wireless technology on the campuses, and through an RFP, develop the functional requirements. Later when we needed to enhance and standardize security, we went through a similar process and selected a firewall vendor, and became much more systematic and methodical about the deployment process.”

Spanning over two decades, Davidoff’s career at CSU allowed him to become well versed in delivering scalable, ground-breaking strategies and solutions for large scale infrastructure deployments. “I am proud of our accomplishments at CSU, and I believe I was able to bring a few key things to the University,” shares Davidoff. “First, is that collaboration works, even if it might be a little slower. Working together and developing RFPs can offer tremendous cost savings, in fact, at the end of my career, most of the equipment we purchased was at discounts greater than 80 percent.  My job was to create the most efficient administrative process that requires the least amount of money, while providing an exceptional student learning experience.”

“We need to eliminate complexity to enable innovation. If we keep this complexity in our environment, every time someone wants to innovate, we need to change all these features and configurations. Many vendors or wireless controllers have hundreds of different features and it’s difficult to develop best practices. We need a deterministic network and not a non-deterministic network in order to predict the performance.”

— Michel Davidoff
Chief Strategist of Education, Nile

Encouraging Collaboration
Over the years, Davidoff was responsible for security, log management, and later in 2014, developing the cloud strategy for CSU. “For the next three to four years, I developed the CSU cloud strategy and I believe the biggest selling point for leadership was, unlike networking, the Cloud was a new technology,” explains Davidoff. “Instead of several experts from Amazon Cloud and several Azure experts at CSU, I suggested creating a small team that focused on cloud technology and how to make it the most efficient and automated. Throughout my career, I’ve seen the value of collaboration, especially when making important decisions on how the campus is going to run and ensuring systems are as efficient as possible. From a long-term strategic standpoint, I am a believer in the wisdom of the group, rather than the wisdom of the individual. If everyone feels they have a voice, a successful outcome is more likely. This approach was aligned with my approach that we don’t give a campus a budget, we give them a solution.”

A day after Davidoff was set to retire in March 2020, CSU shut down its physical campuses due to the pandemic. “Leadership knew CSU must prepare for remote learning and I began doing a lot of research, along with forming a working group,” explains Davidoff. “We selected a small company to help us teach online in case we would need to offer remote classes. Part of the contract included free licenses for half a million students for up to ten years, as well as licenses for every faculty and staff member. We trained everyone on the software and ensured we could operate online. When the pandemic hit, CSU was the first university system in the U.S. without any downtime because our processes and strategies were ready to go.”

Bringing Insights to a New Role
After retiring, Davidoff began thinking about where he could have an even larger impact on education and helping students. “It became clear that technology companies, especially in the networking domain, are a place where I could make my mark in creating efficient technology solutions,” shares Davidoff. “I learned of a new company, Nile, and I wanted to bring my knowledge and unique perspective of higher education infrastructure and my vast experience in over two hundred network deployments. I knew I could share how the customer thinks because I had been the customer for thirty years.”

“The Edge team works hard to stay at the forefront of innovation in the marketplace. In the world of enterprise networking, Nile represents an entirely new approach that enables organizations to offload the often overwhelming complexities of network management while reaping the benefits of a financially predictable, highly adaptable, and supremely secure network environment. We’re proud to have Nile as a new awardee and partner in our fast-growing EdgeMarket co-op.”

— Dan Miller
Associate Vice President, EdgeMarket

Joining Nile as the Chief Strategist of Education early last year, Davidoff aligns the company’s strategy with an educational lens to ensure all technology and services deliver a superior connectivity experience. “I love the thought leadership part of my role at Nile and writing papers about rethinking networking and higher education,” says Davidoff. “I talk to a lot of students and gather valuable insights about today’s learning expectations. Nile modernizes IT operations through the delivery of a new wired and wireless enterprise network and as a leader in enterprise Network as a Service (NaaS), it allows institutions to stabilize their budgets. From a financial perspective, you’re able to buy a service that assures capacity, availability, and performance. Organizations can plan how much money is needed every year, instead of seeing a huge spike in the budget five years from now to replace the network or to replace routing. Plus, most importantly, using Nile services helps free up staff to focus on other initiatives, like classroom technology or digital transformation.”

“Normally, if an institution purchases a technology solution from a vendor, that system is at max performance on day one,” continues Davidoff. “Six months later, your firewall is upgraded, your core router is not at current code, and you added ten new features. Your capacity and features are now starting to degrade. Without the time to take care of all the maintenance that needs to happen, your investment keeps losing value over time.”

Davidoff says many organizations are not sufficiently leveraging automation in order to efficiently run and maintain the network while creating complexity that no human can solve. “We need to eliminate complexity to enable innovation. If we keep this complexity in our environment, every time someone wants to innovate, we need to change all these features and configurations. Many vendors or wireless controllers have hundreds of different features and it’s difficult to develop best practices. We need a deterministic network and not a non-deterministic network in order to predict the performance.”

Partnering with Edge
Recognizing the important role networking infrastructure plays in the evolution of IT, Edge recently released an RFP to prospective vendors who could provide a NaaS to member organizations. The goal was to provide Edge members with NaaS services that allow these institutions to focus on promoting capabilities and skills, while reducing costs, promoting efficiencies, and improving security. Davidoff led Nile’s response to the RFP and was recently awarded a master contract with Edge (CBTS was the other awardee). “The Edge team works hard to stay at the forefront of innovation in the marketplace,” says Dan Miller, Associate Vice President, EdgeMarket. “In the world of enterprise networking, Nile represents an entirely new approach that enables organizations to offload the often overwhelming complexities of network management while reaping the benefits of a financially predictable, highly adaptable, and supremely secure network environment. We’re proud to have Nile as a new awardee and partner in our fast-growing EdgeMarket co-op.”

Nile helps higher education institutions deliver an uninterrupted wired and wireless experience with a performance guarantee for coverage, availability, and capacity. “Nile can help free up capital and resources to focus on meeting the demands of modern education,” says Davidoff. “We want to help institutions deliver on their mission and provide the strategic value that leadership is looking to achieve. Nile aims to help organizations break free from the traditional constraints of legacy network infrastructures and use IT resources to strategically enhance learning in a digital era.”

To learn more about how Nile is helping institutions move beyond the networking status quo, visit nilesecure.com/enterprise-network/higher-education.

View Article in View From The Edge Magazine

The post Empowering Campus Networks appeared first on NJEdge Inc.


Reorganizing, Restructuring, and Revisioning Your IT Organization for Digital Transformation

As education institutions and public sector organizations continue to navigate through the critical process of adapting their IT organizations for the digital age, many look for innovative ways to align team members and streamline processes to help advance these objectives. To create an effective strategy, Christopher Markham, Executive Vice President and Chief Revenue Officer, Edge, says starting with a few basic questions can help frame the conversation in how to move forward. “An important question to begin with is how does your organization view information technology? Do you view IT more as an engineering operation or as a service operation? Leadership must also determine if IT is viewed as an art or science, because there are plenty of institutions where IT is expected to be the primary change agent or innovator, not just in the administrative side of the house, but in educational technologies.”

“Organizations should also explore their return on investment from IT, including technology assets and staff,” continues Markham. “Do you have a return on investment and a rate of return? In addition, leadership must explore if technology is informing the business process both on the administrative and academic side, or is technology being informed by those business processes.” Achieving alignment across an IT organization involves several core axioms, including:

- Authority and accountability must match
- Specialization and teamwork
- Precise domains and clear boundaries
- The basis of a substructure
- Avoid conflicts of interest
- Cluster by professional synergies
- Business within a business

“The golden rule is that authority and accountability in an IT organization must match,” says Markham. “You want to define clear boundaries with no overlaps or gaps and divide a function into groups based upon its strengths. In addition, cluster groups under a common leader based on similar professions. Institutions must also view higher education and information technology as a business. Faculty, students, and staff are considered customers and every manager is an entrepreneur. An entrepreneur is anyone who brings together all the different pieces to ensure service delivery of IT and high-quality services and solutions.”

“IT governance, funding and financial management, and enterprise data and data governance are among the top technology-related domains that impact digital transformation readiness.”

— Christopher Markham
Executive Vice President and Chief Revenue Officer, Edge

Achieving Digital Transformation Readiness
The first principle of aligning authority and accountability is of top importance and what Markham calls the golden rule in IT organizational design. “This alignment is essential to the success of every IT organization and institution that it is serving. In a particular case study, a CIO appointed a few process owners at the suggestion of process improvement consultants. Each was assigned a process that engaged people from various parts of the organization in producing a specific service. These process owners had authority over those processes, and while they were collaborative and involved stakeholders in designing and implementing the new processes, they were not process facilitators who served others by bringing teams to consensus on how they’ll work together. Process owners didn’t have matching accountability for the effectiveness of those processes and weren’t always the individuals accountable for delivering those services. The groups delivering those services were accountable for delivery, but they didn’t have the power to determine the processes they used to do their jobs.”

“If these service delivery groups failed, there was no way to know whether it was due to their own poor performance or due to a bad process,” continues Markham. “Nonetheless, they took the blame. Process owners implemented detailed, rigorous business processes and succeeded at their mission, but the organization became bureaucratic, slow, and inflexible as a result. This structure violated the golden rule. In re-envisioning and restructuring an IT organization, the CIO needs to decide the rules of the game and create the organizational ecosystem, including the group structure, the resource governance process, and the culture.”

Increasing the Pace of Innovation
Once the right structure is in place, leaders can take the opportunity to adjust domains as needed, arbitrate any disputes, create a healthy environment for teamwork, and develop talent through recruiting, inspiring, and coaching efforts. “Leaders should manage performance including negotiating staff’s objectives, giving frequent feedback, measuring the results, deciding rewards, and managing performance problems,” says Markham. “CIOs can leverage performance programs and evaluations to restructure, reorganize and incentivize.  They must also manage commitments and resources which includes assigning work within the group and coordinating shared decisions, like common methods and tools and professional practices. In addition, the CIO must make decisions when consensus cannot be reached.”

Markham shares another case study where the CIO in a large insurance company was tasked with addressing complaints from the business executives regarding the IT department’s opacity, unresponsiveness, and poor understanding of their business strategies. “The leadership in this organization was frustrated that they couldn’t control IT’s priorities and did not understand why many of the requests were not being fulfilled. There was a trend toward decentralization and many business units had started their own small IT groups, which the CIO disparagingly called Shadow IT. These groups only existed because business units did not want to do business with corporate IT. In response, the CIO dedicated a group to each business unit and divided his engineering staff among them. Each group was relatively self-sufficient with all the skills needed to deliver.”

“The senior managers also served as the primary liaisons to those business units,” continues Markham. “The CIO felt this structure would appease the business units and stave off further decentralization, while holding senior managers accountable for business results and client satisfaction. Unfortunately, technical specialists were needed throughout the organization, and since technology subspeciality was scattered among the various client-dedicated groups, this limited their professional exchange. When the sales team, for example, ran into technical challenges, they may not have known that someone in another group already had encountered that issue and knew a solution. Their peers were busy with other priorities, costs rose, and response time slowed, and everyone was reinventing solutions to common problems. Meanwhile, there was little impetus for standards, and individual teams built systems that were optimal for their specific clients, not for the enterprise as a whole.”

Markham continues, “The pace of innovation also slowed, and the organization could not hire an expert in an emerging technology until demand grew across the whole enterprise. As a result, business opportunities to build customer loyalty were missed and the impacts extended beyond IT’s performance. Over time, the structure led to multiple general ledger systems and multiple records for the same customer. Synergies were lost as the company lost a single view of its customers, resources, and suppliers.”

Including productivity specialists can bring efficiency to an IT organization which can translate into cost savings for return on investment. “Specialists have ready answers and don’t have to climb the learning curve with each new challenge,” says Markham. “Quality specialists know the latest methods and technologies in their field and understand how their products are more capable and have lower lifecycle costs. Competence and experience deliver results with fewer risks. Innovation specialists can keep up with the literature and be the first to learn about emerging technologies and techniques.  As a result, the pace of innovation improves. Since they are confident in their abilities, specialists experience less stress, are more productive, and are more likely to excel in their career.”

 “An important question to begin with is how does your organization view information technology? Do you view IT more as an engineering operation or as a service operation? Leadership must also determine if IT is viewed as an art or science, because there are plenty of institutions where IT is expected to be the primary change agent or innovator, not just in the administrative side of the house, but in educational technologies.”

— Christopher Markham
Executive Vice President and Chief Revenue Officer, Edge

Driving Organizational Change
Creating an IT strategy that optimizes processes and technology and fosters a culture of innovation includes several domains of enterprise architecture. “IT governance, funding and financial management, and enterprise data and data governance are among the top technology-related domains that impact digital transformation readiness,” says Markham. “Each of these domains represent specializations of the IT reference disciplines or olive branches from those IT reference disciplines, and the business architecture is an olive branch with each of the functional offices in both administration and academics. But without labeling these domains properly as a CIO, it’s very difficult to reorganize, restructure, or re-envision your organization. The cost of overlapping these domains and clustering by professional synergies is reduced specialization, redundant efforts, confusion, product disintegration, less innovation and teamwork, and lack of entrepreneurship.”

Edge’s E360 assessment is designed to provide a holistic, 360-degree view of an institution’s current-state technology program with a focus on the technology-related domains. Taking a diagnostic and prescriptive approach to evaluating the technology organization, Edge looks at four key areas. “We first identify any unreliable processes and if there is reduced specialization as a result of these gaps,” explains Markham. “We also look if that reduced specialization leads to conflicts of interest. The E360 assessment also focuses on the professional exchange between the domains, if there are domain overlaps, the level of coordination, and whether it is a whole business. Lastly, we explore the substructure and the results of reduced specialization, domain overlaps, and inappropriate biases. E360 produces a final report that not only includes outcomes and analysis, but a three-year roadmap for an IT organization to drive organizational change, improve their technology landscape, and achieve digital transformation goals successfully.”

Ready to achieve operational efficiency and digital transformation? Learn more at njedge.net/solutions-overview/digital-transformation

View Article in View From The Edge Magazine

The post Reorganizing, Restructuring, and Revisioning Your IT Organization for Digital Transformation appeared first on NJEdge Inc.


Edge Colocation Services

In an age where data collection and analysis continue to grow in importance in nearly every industry, many organizations seek innovative and affordable ways to store data and expand their networking capabilities. In the education community, not every institution is equipped with a large IT infrastructure or the space to host servers, networking equipment, and data storage. To help address this need, Edge offers affordable colocation services where member institutions can receive data center support and colocation space for disaster recovery and business continuity. “Colleges and universities have always had the responsibility to design, build, and run data centers on college campuses,” says Bruce Tyrrell, Associate Vice President Programs & Services, Edge. “Unfortunately, the physical infrastructure, including commercial power, backup generators, and environmental services are extremely expensive and complex to deploy, especially in a typical college campus environment that was not designed for these requirements. Our colocation services are an enhancement of our members’ existing connectivity to the Edge network. By leveraging their existing last mile connections, members have the ability to place hardware at one of several locations around the region.”

With Edge maintaining high availability colocation data centers throughout the Northeast region, several members are choosing to exit the owned data center space and move their hardware to an off-campus location. “Many institutions are relocating hardware to a purpose-built facility that has been professionally engineered and constructed with the desired features,” says Tyrrell. “Access to these features is included in the monthly recurring costs for space outsourcing and using a colocation provider can help reduce the need for additional staff to handle the physical management of those environments.”

Benefits of Colocation
From their optical network, Edge can build connections for members from their campuses directly into the colocation facilities. “Member institutions can choose to place hardware infrastructure at the enterprise-grade colocation facility on the campus of Montclair State University at a significant discount over commercial space,” explains Tyrrell. “Colocation is available along our optical network and provides access to 165 Halsey Street in Newark, the New Jersey Fiber Exchange (NJFX) in Wall Township adjacent to the Tata international cable landing station, and 401 N Broad Street in Philadelphia. Members can also access the Digital Realty colocation facility at 32 Avenue of the Americas in Manhattan. Edge is expanding our colocation capability by adding the colocation facility at Data Bank in Piscataway, New Jersey, a bespoke water-cooled facility designed with High Performance Computing in mind.”

Colocation data centers allow members to store their equipment in a secure location with a public IP address, bandwidth, and power availability. These locations also include backup power in the event of an outage. “An organization can use Edge colocation services to extend their internal infrastructure into a professional collocation space from an end user point of view,” says Tyrrell. “The Edge model is unique in that the bandwidth provided to our members is not shared with any other organization, and since this extension is transparent, students, faculty, and staff do not realize their data is traveling off campus and out to a data center and back—the data transfer only takes microseconds.”

With Edge as the provider of the bandwidth, both internally connected to the campus, as well as externally via their internet connections, these connections are designed to scale and burst. “Unlike a cloud environment, where there is an increased cost for bursting when an organization’s computing resources reach their max, a colocation environment offers costs that are fixed,” explains Tyrrell. “An organization rents a cabinet and purchases hardware to store in this cabinet. Edge fixes the cost of transport and internet capacity which can allow for greater budget predictability. This is different from the Cloud, where once an application is placed in the Cloud, upticks in utilization for those apps can have a direct impact on the monthly expense to operate those services. For some institutions, having a fixed monthly budget for colocation services is easier to operationalize from a financial perspective.”

“Unlike a cloud environment, where there is an increased cost for bursting when an organization’s computing resources reach their max, a colocation environment offers costs that are fixed,” explains Tyrrell. “An organization rents a cabinet and purchases hardware to store in this cabinet. Edge fixes the cost of transport and internet capacity which can allow for greater budget predictability. This is different from the Cloud, where once an application is placed in the Cloud, upticks in utilization for those apps can have a direct impact on the monthly expense to operate those services. For some institutions, having a fixed monthly budget for colocation services is easier to operationalize from a financial perspective.”

— Bruce Tyrrell
Associate Vice President Programs & Services, Edge

Onboarding and Support
When an institution selects colocation services, Edge’s engineers help walk the member’s IT team through the ins and outs of the processes and can accompany them to colocation facilities to familiarize them with the data centers. “Edge acquires the space, coordinates the connectivity, and assists in providing remote and physically secured access to the cabinets or gauges,” says Tyrrell. “We also handle all the administrative pieces like billing and passing along clean invoices to the member. Since colocation facilities can often be complex and intimidating, Edge can visit the facilities with you during the onboarding process.”

“Colocation is a unique environment that can be complex from both an operational and an acquisition perspective,” continues Tyrrell. “Edge has decades of experience in operating these environments and we stand ready to assist our members with transitioning hardware and application into these professionally maintained tier three colocation facilities. Once the transition has been made, members are better positioned to weather the storms and unforeseen outage conditions that have been known to impact on campus data centers. This resilient infrastructure can provide peace of mind and a cost-friendly way to optimize resources and meet the growing demands of today’s higher education community.”

To learn more about Edge’s colocation services and how to take advantage of the latest and greatest developments in networking technology, visit njedge.net/solutions-overview/network-connectivity-and-internet2.

View Article in View From The Edge Magazine

The post Edge Colocation Services appeared first on NJEdge Inc.


Navigating AI-Powered Education and the Future of Teaching and Learning


With the age of artificial intelligence (AI) well underway, how we work, learn, and conduct business continues to transform and open the door to new opportunities. In the classroom, AI can be a powerful teaching tool and support innovative and interactive learning techniques and critical thinking. Dr. C. Edward Watson, Associate Vice President for Curricular and Pedagogical Innovation with the American Association of Colleges and Universities (AAC&U) and formerly Director of the Center for Teaching and Learning at the University of Georgia, explores how AI is revolutionizing the future of learning and how educators can adapt to this new era of human thinking in his new book, Teaching with AI: A Practical Guide to a New Era of Human Learning (Johns Hopkins University Press).

“AI is a significant game changer and is presenting a new challenge that is going to be dramatically different from past disruptive innovations,” says Watson. “Goldman Sachs and other sources estimate that two-thirds of U.S. occupations will be impacted by AI.1 With a vastly accelerating expectation within the workforce that new graduates will be able to leverage AI for work, there is a growing pressure on institutions of higher education to ensure students become well-versed in AI techniques. This new learning outcome for higher education is being termed AI literacy.”

AI is also introducing a new academic integrity challenge including how to accurately determine if students are using AI to complete assignments. Along with Teaching with AI co-author, José Antonio Bowen, Watson explores crucial questions related to academic integrity, cheating, and other emerging issues in AI-powered education. “The real focus of the book is how to create assignments and practices that increase the probability that students will engage with the work rather than turn to AI, as well as ways to partner with AI and use these tools in meaningful and impactful ways. Instead of fearing AI and how students may misuse it, the education community must employ logical pedagogical practices within the classroom that encourage our students to become competent partners with AI, including building AI literacy skills that will help them on their future career paths.”


AI in the Classroom and Beyond
With over twenty-five years of experience in faculty and instructional development, Watson is nationally recognized for his expertise in general education, active learning, classroom practice, course design, faculty development, student learning, and learning outcomes assessment. “I believe in the transformative opportunities that higher education can provide individuals, especially first-generation students like myself,” shares Watson. “When I entered a master’s program in English, I became increasingly interested in the puzzle of how learning works. I wanted to better understand how to make learning more meaningful for students, how to engage them, and how to ensure what I’m teaching is not just memorized for an exam, but will be remembered and utilized long after the course is completed. As I advanced in my career, I was able to take what I learned helping students in my own classroom to provide programming and opportunities that could benefit the breadth of higher education.”

Even though change can be slow within the education community, Watson says the dramatic, fast shifts happening in the industry are causing many institutions to take notice. “Unfortunately, as higher education begins to adapt, AI is creating new digital inequities. Many institutions are struggling to determine how to best serve their students given the new challenges and opportunities. Institutions will need leaders who continue to explore how advancements like AI are changing their world and the ways in which they can harness and manage AI as a powerful teaching tool.”

“To begin to understand AI and its capabilities, I recommend that faculty copy and paste a current assignment into two or three different AI tools to better understand the opportunities, restrictions, and surprises. This can provide insight into ways to improve the assignment and to make it better aligned with the way students might be expected to complete similar work in the real world post-graduation. I think going forward, we will see AI more deeply integrated within systems we already depend upon. For instance, within learning management systems (LMS), it’s foreseeable that when students submit assignments, the AI-assisted LMS will check for AI, plagiarism, and may even grade and provide customized feedback using a faculty-designed rubric.”

From a teaching perspective, AI can also be beneficial in helping instructors create rubrics and improve the quality of their course syllabus and assignments. “I hope more faculty look at AI as a toolbox, rather than something to fear,” says Watson. “Teachers are still the experts in their field, and AI can help them elevate their courses and find new ways to improve the learning experience. AI is not a search engine; it is more like a knowledgeable colleague. Using it is more about prompt engineering and having a conversation that fine tunes the results. Faculty should see AI as an idea generator that could be leveraged and helpful with many aspects of the classroom and beyond.”

ChatGPT, a chatbot developed by OpenAI and launched in November 2022, is a common AI tool used to automate tasks, compose essays and emails, and have human-like conversations. According to a recent survey conducted by Study.com, 89 percent of students over the age of 18 have used ChatGPT to help with homework, while 48 percent confessed they had used it to complete an at-home test or quiz.2 “While many students are familiar with AI tools like ChatGPT, not all educators are aware of its prevalence, causing a disconnect,” says Watson. “Showing faculty how this tool can be useful is key, and encouraging them to have open and honest conversations with students about how AI can be used as a tool for learning, rather than a way to cheat on schoolwork, is now an essential early-in-the-semester conversation. Instead of focusing on how AI is breaking your pedagogy, consider how AI is relevant to what you would like to accomplish in preparing your students for the future.”


Adapting Higher Education in a New Era
With a theme of Excelling in a Digital Teaching and Learning Future, EdgeCon Spring 2024 will welcome Dr. Watson as a keynote speaker to explore how higher education is evolving and ways to overcome the challenges the industry is facing. “A recent Gallup survey shows a steep decline in how higher education is perceived in this country,3” says Watson. “Less than half of Americans have confidence in higher education. All of us within our industry should consider how we can positively impact this national perception of higher education as there are ramifications. Not preparing students for what will certainly be an AI-enhanced career, or recklessly using AI detection tools in ways that might unjustly accuse significant numbers of students of cheating, can be significantly dangerous for higher education. Combine such practices with the ongoing student debt crisis and a politically polarized higher education dynamic, and more and more students will question whether higher education is still as important as it once was. Already many ask if higher education is still a cornerstone of the American Dream.”

“I look forward to discussing the higher education landscape at EdgeCon and exploring suggestions for how we might move forward,” continues Watson. “We need to acknowledge that AI is going to be an important thread in the education and research industries. Disruption is not always a bad thing, especially in the workforce. AI can help improve efficiencies, reduce costs, increase productivity, and create new job opportunities. In the higher education setting, these tools have the potential to offer personalized learning experiences, strengthen retention, and resolve accessibility issues. Along with the potential challenges this type of technology may introduce, we must also look at the positive opportunities that will arise and how we can better prepare our students for the world that is already waiting for them.”

View Article in View From The Edge Magazine

The post Navigating AI-Powered Education and the Future of Teaching and Learning appeared first on NJEdge Inc.


Maintaining Quality Online Learning Programs


Creating and sustaining quality online learning experiences has become a top priority across the higher education community and plays a key role in the appeal and competitiveness of an institution. As these online programs are developed and implemented, quality assurance frameworks and processes are essential to ensuring that these programs meet rigorous standards and continue to align with learning objectives. “Having standards that everyone from across an institution has to meet is of paramount importance in higher education,” says Joshua Gaul, Associate Vice President & Chief Digital Learning Officer. “The lack of standards in today’s higher education system is a top reason for the drop in retention and enrollment, especially among community colleges and small private schools. Every organization should ensure their course offerings and entire digital presence meet quality industry standards, including ADA compliance.”

Using Rubrics to Assess Course Quality
To help ensure learners are engaging with high-quality courses, Quality Matters (QM) is among the most well-known programs for creating a scalable process for quality assurance. “QM is a global organization leading quality assurance in online and digital teaching and learning and is used to impact the quality of teaching and learning at a state and national level,” says Gaul. “QM has eight general standards and 42 total standards. More than 1,500 colleges and universities have joined the Quality Matters community and they’ve certified thousands of online and hybrid courses, as well as trained over 60,000 education professionals, including myself, on online course design standards.”

The SUNY Online Course Quality Review Rubric (OSCQR) is another well-respected online design rubric, used and developed by SUNY Online, in collaboration with campuses through the SUNY system. “With six general standards and 50 total standards, the OSCQR is openly licensed for anyone to use and adopt and aims to support continuous improvements and quality accessibility in online courses,” explains Gaul. “The rubric and the online course review and refresh process support large scale online course design efforts systematically and consistently. The goal is to ensure that all online courses meet a quality instructional design and accessibility standard, and are regularly and systematically reviewed, refreshed, and improved to reflect campus guidelines and research based online effective practices.”

“In addition to QM and OSCQR, there are many other rubrics being used to systematically check courses against,” continues Gaul. “No matter which rubric you are using, it’s important to have accountability and a knowledge sharing process about these standards across the entire institution.”

Implementing an Evaluation Cycle
Regardless of the program being used to conduct online course quality review, developing an evaluation cycle is essential to ensuring courses are meeting key standards. “The first step in implementing an evaluation cycle is gathering data and understanding the trends of your organization,” says Gaul. “What is the enrollment frequency, what courses have high enrollment, how many students fail or drop out? In classes that have very low enrollment or high drop rates, what are their barriers to success? Institutions should review the disciplines and courses with the highest enrollment and which courses should be evaluated and revised on a more frequent basis. Looking at the data closely can provide valuable insight into the effectiveness and quality of each online course.”

In between offerings, institutions should take stock of online courses as a whole and reflect on ways to enhance course content, engagement, and student outcomes. During this assessment, important questions to ask include:

Does the course learning environment welcome and include all students?
Is engagement encouraging?
Are there opportunities for self-reflection and discussion?
Do activities provide opportunities for realistic, relevant, and meaningful application of knowledge?
Are students achieving the goals of the course?
Is the workload reasonable for both students and the instructor?

Adopting a mission to review and update all courses to ensure the highest quality content and experience can go a long way in improving the brand of an institution and creating a student-centric learning environment that attracts positive attention. To successfully create an evaluation cycle, Gaul says each institution needs a defined project management process. “Each organization should map out a review process that defines individual roles and responsibilities. This should involve instructional designers, librarians, IT services, student support, and academic support. This process should not fall solely on the instructor. If you think of it like building a house, the faculty member is the homeowner, the instructional designer is the general contractor, and IT is your plumbing and electrical. Every person needs to be involved in the planning from day one to ensure a successful build.”

Building a Course Assessment System
Any time an institution begins assessing courses, whether it’s from a system level or individual course level, there are often barriers to overcome. “When technology is involved in instruction, there should be a collaborative effort to identify and overcome any hurdles,” says Gaul. “Technology should never lead academia; teaching should lead the technology. We must remember that all students are cognitively different, and this is why Universal Design for Learning (UDL) leans towards accessibility and flexibility and removing barriers to learning. These barriers can include inadequate support, where students do not know where to go for help, whether that’s technical, tutoring, writing style, etc. Access to support must be built into the course in order for students to feel supported and demonstrate emotional intelligence within the class.”

Other common barriers include a lack of a learning community and boredom. Without students feeling connected to the instructor and other classmates, they can become isolated, and without interesting content and delivery, students can feel disengaged. “System barriers we regularly see in regards to course assessments involve implementation,” says Gaul. “Lack of commitment, poor preparation, and inconsistency can all affect the success of a course assessment. Unless there’s some sort of checks and balances, courses are going to be inconsistent, and students are going to have difficulty moving seamlessly between classes if they’re taking more than one online course. The purpose of building a course assessment system is to free up faculty and give them the proper support they need to be successful.”


Instructional Design Support
Designing and managing online courses can be a challenging task, especially without the resources and training to do so effectively. Well-versed in instructional design, the Edge team understands digitally-enabled learning environments and how to evaluate online courses against standard industry rubrics. “Edge understands the methodologies, rubrics, and standards that go into the creation of a high-quality curriculum,” says Gaul. “We have worked with colleges and universities to conduct evaluations and identify trends we see in their courses. We can also build workshops to help train faculty and students and improve their understanding of why online instruction is different from traditional classroom learning. Specifically, we help prepare staff and students for the challenge of online education through engaging, student-centered experiences built to encourage online presence and active learning methodologies.”

Edge’s course and curriculum evaluation services are designed to help an institution deliver a top-quality product. “Whether a course is fully online, hybrid, HyFlex, or in-person, we can help make sure it meets all the standards of quality technology enhanced instruction,” says Gaul. “This can provide a level of risk management and quality control that can often get ignored when there’s too much focus on the tools, system recruitment, and retention. Member institutions can also count on web and educational technology support. Edge provides technology and web support service management frameworks and ticketing systems to help with website maintenance and web content management. Most importantly, we can help provide thought leadership in how to implement a systemwide course assessment and revision cycle.”

“Our team of experts can help an organization bridge the gap between technology and academia and lead a collaborative effort as opposed to two silos working in competition,” continues Gaul. “We can customize for smaller niche projects, support larger, longer-term initiatives, or become an extension of your team. Edge can provide documentation used in the project and whatever we produce will be owned by the institution, whether it’s a learning object or a series of training modules.”

Gaul says if online courses are not being reviewed and revised regularly, those learning experiences will not make an impact. “Revision cycles that are high quality, trust the data, and have accountability and responsibility are incredibly important to ensuring course content is engaging and impactful. Every institution should look at how their offices work together to create a course evaluation and revision cycle that is beneficial and supportive to the student. As you look for ways to improve your institution, Edge wants to help you transform your instruction, advance your online education, and find powerful ways to improve the way you do business.”

To learn more about optimizing courses for online learning and transforming the student experience, visit njedge.net/solutions-overview/digital-learning.

View Article in View From The Edge Magazine

The post Maintaining Quality Online Learning Programs appeared first on NJEdge Inc.


Hyperledger Foundation

Meet Aries Agent Controller, a New Hyperledger Lab


A code base developed and contributed by Superlogic that facilitates deploying Hyperledger Aries agents in cloud environments is the latest Hyperledger lab. The new lab, Aries Agent Controller, is now officially a part of the Hyperledger ecosystem, and we are excited to work with the broader community to grow it.  


Identity At The Center - Podcast


We are thrilled to announce a new Sponsor Spotlight on the Identity at the Center podcast! We had the pleasure of hosting Marco Venuti, Director of IAM Business Acceleration for Thales, and Jason Keenaghan, Director of IAM Product Management for Thales.

In this episode, we explore the Thales Cloud Security OneWelcome Identity Platform and its comprehensive solution for managing digital identities. We dive deep into the world of B2B IAM and discuss its differences from B2C and B2E IAM.

You can listen to the episode on IDACPodcast.com or in your favorite podcast app. Don't miss out on the insights and expert perspectives straight from the source!

A big thank you to Marco and Jason for joining us and sharing their valuable knowledge.

#iam #podcast #idac

Wednesday, 28. February 2024

Next Level Supply Chain Podcast with GS1

Behind the Barcode: Mastering 2D Barcodes with GS1 US's Gena Morgan


Keeping track of product information and inventory with multiple barcode types can be tricky for businesses. 

Gena Morgan, who leads the standards team at GS1 US, shares valuable insights into the world of barcodes, specifically focusing on the transition from traditional 1D barcodes to 2D barcodes and the importance of GS1 standards in driving industry adoption. Gena explains the technical differences between traditional linear barcodes and 2D barcodes, such as QR codes and GS1 DataMatrix, highlighting the increased data capacity and smaller footprint of 2D barcodes. 

She elaborates on the potential consumer and business benefits, emphasizing the ability of 2D barcodes to provide more accurate and direct information to consumers, streamline supply chain processes for brands and retailers, and enable functionalities such as product recalls and promotions. The discussion delves into the challenges and opportunities presented by the transition to 2D barcodes, as well as the support and resources available for brands looking to embark on this journey. Gena's expertise on the subject makes for an enlightening and informative conversation, encouraging businesses to consider the advantages of 2D barcodes and GS1 standards in their operations.
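To make the data capacity point concrete, here is a minimal sketch of how a product identifier and a couple of attributes could be composed into a GS1 Digital Link style URI and then rendered as a QR code. The GTIN, batch, and expiry values are invented for illustration, and production systems should follow the GS1 Digital Link specification and GS1 US guidance rather than this sketch.

```typescript
// Minimal sketch: composing a GS1 Digital Link style URI that a 2D barcode (for example a
// QR code) could carry. The GTIN, batch/lot, and expiry values below are invented for
// illustration; production use should follow the GS1 Digital Link specification.
interface ProductIdentifiers {
  gtin: string;       // Global Trade Item Number (GS1 Application Identifier 01)
  batchLot?: string;  // AI 10, a key qualifier carried in the URI path
  expiry?: string;    // AI 17 (YYMMDD), a data attribute carried in the query string
}

function buildDigitalLinkUri(p: ProductIdentifiers, resolver = "https://id.gs1.org"): string {
  let uri = `${resolver}/01/${encodeURIComponent(p.gtin)}`;
  if (p.batchLot) uri += `/10/${encodeURIComponent(p.batchLot)}`;
  if (p.expiry) uri += `?17=${encodeURIComponent(p.expiry)}`;
  return uri;
}

// Example with hypothetical values; the resulting string would then be encoded into a QR code:
// https://id.gs1.org/01/09506000134352/10/ABC123?17=261231
console.log(buildDigitalLinkUri({ gtin: "09506000134352", batchLot: "ABC123", expiry: "261231" }));
```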

 

Key takeaways: 

 The transition from traditional barcodes to 2D barcodes allows brands to provide information to consumers and tailor experiences. 

The adoption of 2D barcodes in the industry allows products to carry more data in a smaller footprint.

GS1 US supports brands transitioning to 2D barcodes and GS1 digital link standards with pilot programs and toolkits. 

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with guest:

Gena Morgan on LinkedIn

 

Resources:

Learn More About 2D Barcodes

Resources for the Transition from 1D to 2D Barcodes

Fresenius Kabi Infuses Safety from Production to Patient with Unit-of-Use 2D Barcodes

 

Tuesday, 27. February 2024

Hyperledger Foundation

Building Better Together: Insights from the Governing Board to Mark Hyperledger Foundation’s 8th Anniversary


As a follow-up to Hyperledger 8: A Celebration of Building Better Together, Daniela Barbosa asked our Governing Board Representatives for their take on the success and value of Hyperledger Foundation as well as the technical priorities they see for the community. 


Oasis Open Projects

OASIS Members to Advance Global Standard for Computing Ecosystem Supply Chain Data Exchange


Cisco, Hewlett Packard Enterprise, Intel, Micron, Microsoft, U.S. NIST, SAP, and Others to Develop Use Cases, Standards, and APIs that Enable End-to-End Visibility for Supply Chains

Boston, MA – 27 February 2024 – Members of OASIS Open, the international open source and standards consortium, have formed the Computing Ecosystem Supply Chain Technical Committee (CES-TC). Leaders in the computing and semiconductor industries established the TC with aims to revolutionize global supply chain dynamics through standardized data exchange. With digital transformation rapidly reshaping industries and systems worldwide, the imperative for seamless data exchange has never been more pronounced.

This collaborative endeavor highlights the consensus in the computing ecosystem that digital transformation requires standardized data exchange among member companies over a network. The TC will focus on developing use cases, data schemas and ontologies, and APIs that enable end-to-end visibility for supply chains. The TC’s work will facilitate building resilient capacity, trusted hardware and software, secure systems, and sustainable practices to benefit all customers and end-users.

“Standardization plays a pivotal role in establishing secure and sustainable systems, which are crucial for the evolving digital landscape,” noted Joaquin Sufuentes, CES-TC co-chair, of Intel. “As the CES-TC sets its course, it signifies the collective dedication of OASIS members to lead the charge in technological advancement that directly enriches industries and end-users. The TC’s work will extend to smart contracts that drive logic functions, process automation, and role-based entitlements within the blockchain context.”

“TC contributions will focus on the data schemas and ontologies that define the attributes and entities and a REST API model for putting the data into and getting the data from blockchain or other distributed infrastructure,” said Tom Dodson, CES-TC co-chair, of Intel. “Through standardized approaches, we are empowering industries with the tools necessary to navigate the complexities of the digital age.”
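The TC has not yet published its schemas or APIs, so the following is a purely hypothetical sketch of the "put the data in, get the data out" REST pattern described above; the endpoint, event fields, and authentication scheme are all placeholder assumptions, not CES-TC deliverables.

```typescript
// Hypothetical sketch only: a REST-style exchange for supply chain events, loosely modeled on
// the pattern described above. No endpoint, schema, or field name shown here is a real
// CES-TC artifact; all are placeholders.
interface SupplyChainEvent {
  eventId: string;
  eventType: "shipment" | "receipt" | "inspection";
  gtin?: string;              // product identifier, if applicable
  quantity?: number;
  timestamp: string;          // ISO 8601
  parties: { sender: string; receiver: string };
}

const BASE_URL = "https://example.org/ces/api/v0"; // placeholder endpoint

async function putEvent(event: SupplyChainEvent, token: string): Promise<void> {
  const res = await fetch(`${BASE_URL}/events`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
    body: JSON.stringify(event),
  });
  if (!res.ok) throw new Error(`put failed: ${res.status}`);
}

async function getEvents(gtin: string, token: string): Promise<SupplyChainEvent[]> {
  const res = await fetch(`${BASE_URL}/events?gtin=${encodeURIComponent(gtin)}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`get failed: ${res.status}`);
  return res.json();
}
```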

Participation in the OASIS CES-TC is open to all through membership in OASIS. The profile for the types of contributors to the CES-TC includes business stakeholders responsible for product delivery, technical experts managing integrations, supply chain professionals, data specialists focusing on ontologies, government representatives concerned with traceability, and industry professionals driving digital transformations.

Support for the CES-TC
Cisco
“The OASIS CES-TC represents a great advancement in standardizing and securing the supply chain of the digital age. By focusing on the development of universally accepted data schemas, APIs, and smart contract specifications, this effort is laying the groundwork for transparency, efficiency, and security in supply chain management. I fully support CES-TC’s efforts to create a more resilient and trustworthy digital ecosystem.”
– Omar Santos, Distinguished Engineer, Cisco | OASIS Board of Directors

Intel
“Working as an ecosystem for the benefit of customers and end users of our computing products requires that we operationalize how we collaborate with data in real time to build more efficient operations and new revenue services. We want to standardize and scale the ability to share the right data and signals.”
-Paul Dumke, Senior Director, Ecosystem Strategy & Operations, Intel Corporation

Micron
“The storage and memory business is complex and competition is fierce. Micron’s success depends on our ability to innovate, and with more than 50,000 lifetime patents, we take innovation very seriously. The value chain ecosystem is no exception. Ecosystem innovation is the next frontier and Micron is thrilled to be on this journey with our fellow CES-TC members.”
-Matt Draper, Senior Director of Micron Supply Chain Optimization

Additional Information
CES Project Charter

The post OASIS Members to Advance Global Standard for Computing Ecosystem Supply Chain Data Exchange appeared first on OASIS Open.


Origin Trail

The ON TRAC(k) podcast returns! Episode 2 on Delegated Staking, AI Agents, & More


We’re excited to announce that the ON TRAC(k) podcast will return on February 29th at 16:00 CET with a brand new episode on delegated staking, AI agents, and more.

Hosted by Jonathan DeYoung (whom you may already know as co-host of Cointelegraph’s The Agenda) and recorded live, the second episode of the ON TRAC(k) podcast features a special guest — Martin Köppelmann, co-founder and CEO of Gnosis! Martin will join the three co-founders of OriginTrail, Žiga Drev, Branimir Rakić, and Tomaž Levak, to discuss a Verifiable Internet for AI & more.

Take this opportunity to tune in to a live conversation between industry pioneers and thought leaders here.

In case you missed it

Last time around, Jonathan DeYoung spoke with the OriginTrail co-founders about the significance of OriginTrail’s V8 Foundation, explored its robust partnerships, and shed light on the ecosystem’s key initiatives including knowledge mining and staking.

If you missed out on watching this episode live, you can watch it back on the OriginTrail YouTube channel or listen wherever you consume your favourite shows.

And, if you’re curious about OriginTrail’s V8 Foundation, you can read more here.

Future Episodes

The On TRAC(k) podcast will continue to bring you the latest and most innovative ideas and advancements both in the OriginTrail ecosystem and beyond. We’re giving our listeners exclusive insights into the world of blockchain and Web3 as we develop the technology that empowers brands and builders alike with verifiable, decentralized knowledge through AI and DKG technology.

Here at the On TRAC(k) podcast, we’re lucky to have such a vibrant, curious community of listeners, and we want to give you a listening experience that matches the cutting-edge ideas and excitement in our community. That’s why we’re making you a part of the podcast. Ahead of each episode, you’ll have the chance to submit questions that delve deeper into the things you want to learn more about.

We’re excited to reveal our upcoming guests and topics to you further down the line. To keep up to date with all announcements and upcoming episodes, don’t forget to follow OriginTrail on X and, of course, subscribe to On TRAC(k) wherever you get your podcasts.

Climb aboard and welcome to the OriginTrail community. Together, let’s explore, learn, and shape the future.

About OriginTrail

OriginTrail is an ecosystem-building decentralized knowledge infrastructure for artificial intelligence (AI). With the mission of tackling misinformation, which is exacerbated by AI adoption, OriginTrail enables verifiably tracking the origins of information, discoverability, and integrity of knowledge to enable trusted AI. It has various applications in the domains of real-world assets (RWAs), search and recommendation engines, question-answering systems, and generally knowledge-dependent applications (such as AI systems).

OriginTrail’s initial adoption was in global supply chains, serving as a trusted hub for supply chain data sharing, allowing customers to authenticate and track products and keep these operations secure. In recent years, the rise of AI has not only created unprecedented opportunities for progress but also amplified the challenge of misinformation. OriginTrail also addresses this by functioning as an ecosystem focused on building a trusted knowledge infrastructure for AI in two ways — driving discoverability of the world’s most important knowledge and enabling the verifiable origin of the information. The adoption of OriginTrail in various enterprise solutions underscores the technology’s growing relevance and impact across diverse industries including real-world asset tokenization (RWAs), the construction industry, supply chains, healthcare, metaverse, and others.

OriginTrail is creating a Verifiable Web for decentralized AI by empowering world-class brands and builders. It utilizes its unique Decentralized Knowledge Graph and OriginTrail Parachain to deliver AI-powered search and solutions for enterprises and individuals worldwide.

OriginTrail has gained support and partnerships with world-class organizations such as British Standards Institution, SCAN, Polkadot, Parity, Walmart, the World Federation of Hemophilia, Oracle, and the EU Commission’s Next Generation Internet. These partnerships contribute to advancing OriginTrail’s trusted knowledge foundation and its applicability in trillion-dollar industries while providing a verifiable web of knowledge important in particular to drive the economies of RWAs.

Web | On TRAC(k) Podcasts | X | Facebook | Telegram | LinkedIn | GitHub | Discord

The ON TRAC(k) podcast returns! Episode 2 on Delegated Staking, AI Agents, & More was originally published in OriginTrail on Medium, where people are continuing the conversation by highlighting and responding to this story.


Digital ID for Canadians

First DIACC PCTF-Certified Service Provider Trustmark Granted

Confirming ATB Ventures’ Oliu service PCTF Privacy Component conformance 

Feb. 27, 2024 – Vancouver – We are thrilled to announce that ATB Ventures’ Oliu has been certified against the Pan-Canadian Trust Framework (PCTF) Privacy Component. Established in 2012, DIACC is Canada’s largest and most diverse multistakeholder organization, fostering confidence and consistency in the digital trust and identity services market through its internationally recognized PCTF and standardized third-party conformity assessment program.

Being the first DIACC PCTF-certified service provider is a significant milestone and a unique leadership opportunity.  DIACC PCTF certification provides an assurance signal to the market, indicating that a service fulfills specified requirements. 

The PCTF comprises a set of rules that offers a versatile code of practice and risk assessment approach that organizations agree to follow, which includes best practices, policies, technical specifications, guidance, regulations, and standards, prioritizing interoperability, privacy, security, and trustworthy use of digital identity and personal data. 

ATB’s Oliu, an identity verification and authentication platform, has been subject to certification for the PCTF, including a point-in-time audit conducted by DIACC Accredited Auditor KUMA and an independent committee review for quality assurance. Oliu demonstrated conformity to the PCTF Privacy conformance criteria, meeting the applicable requirements. Based on the conformity assessment process results, DIACC has issued a three-year cycle Trustmark subject to annual surveillance audits and added ATB Oliu to the DIACC Trusted List – an authoritative trust registry of DIACC PCTF-certified service providers.

“This certification begins an exciting journey in providing certainty to the market through trusted services subject to DIACC’s certification program, designed around ISO/IEC 17065,” said DIACC President Joni Brennan. “For Oliu, achieving the certification demonstrates its commitment to providing trustworthy and reliable digital identity verification services and advancing secure and interoperable digital trust and identity services in Canada.”

About DIACC

Established in 2012, DIACC is a non-profit organization of public and private sector members committed to advancing full and beneficial participation in the global digital economy by promoting PCTF adoption and conformity assessment. DIACC prioritizes personal data control, privacy, security, accountability, and inclusive people-centered design.

To learn more about DIACC, please visit https://diacc.ca/ 

ABOUT OLIU™

Oliu is a blockchain-identity management solution that makes it easy for businesses to issue, manage, and verify digital credentials. Built on open (W3C) standards, Oliu leverages identity frameworks such as the Pan-Canadian Trust Framework (PCTF) and National Trust and Identity Fundamentals to make mobility and interoperability between identity systems possible.

To learn more about Oliu, please visit https://oliu.id/ 

About ATB Ventures™

ATB Ventures is the research and innovation arm of ATB Financial, a leading Alberta-based financial institution. Driving growth at the edges and exploring opportunities beyond financial services, ATB Ventures focuses on helping companies bridge the gap between consumers’ increasing concerns about privacy and security, and their desire for more advanced personalized experiences. 

To learn more about ATB Ventures, please visit https://atbventures.com/ 

Monday, 26. February 2024

FIDO Alliance

EMVCo and FIDO Alliance Provide Essential Guidance on Use of FIDO with EMV 3DS


As leaders in authentication and payments spaces respectively, the FIDO Alliance and EMVCo collaborate to provide guidance on how FIDO authentication can be incorporated in payment use-cases allowing merchants, acquirers/PSPs and issuers to have a consistent way to submit and process FIDO authentication data.  

EMVCo released a white paper with FIDO Alliance’s inputs, “EMV® 3-D Secure White Paper – Use of FIDO® Data in 3-D Secure Messages,” which explains how the use of FIDO authentication data in EMV 3DS messages can streamline e-commerce checkout while reducing friction for consumers. 

Authentication flows are evolving, and merchants are increasingly building seamless experiences based on FIDO standards for device-based authentication, where a trusted device is bound to a payment credential to ensure the credential is being used by the verified cardholder. Consequently, it has become apparent that in some scenarios the issuer may require more data to assess risk and validate the authentication cryptographically. 

This paper addresses these scenarios by providing a data structure that allows for a chain of trust to be established between cardholder authentication, FIDO enrolments, and FIDO authentication, giving issuers increased control and insight into the authentication process as well as the ability to cryptographically validate the authentication. 
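Conceptually, the flow pairs a standard FIDO/WebAuthn assertion with the 3DS message stream: the cardholder’s device produces signed authentication data, and the 3DS requestor forwards it so the issuer can validate it. The browser-side sketch below illustrates that first step; the wrapper object and field names are assumptions for illustration, and the authoritative data structure is the one defined in the EMVCo white paper.

```typescript
// Browser-side sketch: obtain a FIDO/WebAuthn assertion and package it for the merchant's
// 3DS requestor. The "fidoAuthenticationData" wrapper below is an illustrative assumption;
// the actual field names and their placement in EMV 3DS messages are defined by EMVCo.
function toBase64Url(buf: ArrayBuffer): string {
  const bytes = String.fromCharCode(...new Uint8Array(buf));
  return btoa(bytes).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

async function collectFidoDataForThreeDS(challenge: Uint8Array, rpId: string) {
  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge,                       // requestor- or issuer-supplied challenge
      rpId,                            // relying party identifier, e.g. "merchant.example"
      userVerification: "required",    // cardholder verification on the device
      timeout: 60000,
    },
  })) as PublicKeyCredential;

  const assertion = credential.response as AuthenticatorAssertionResponse;
  // Hypothetical wrapper the 3DS requestor could forward with the authentication request.
  return {
    fidoAuthenticationData: {
      credentialId: toBase64Url(credential.rawId),
      clientDataJSON: toBase64Url(assertion.clientDataJSON),
      authenticatorData: toBase64Url(assertion.authenticatorData),
      signature: toBase64Url(assertion.signature),
    },
  };
}
```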

In the EU, where payment authentication is required as per PSD2 SCA, this industry-wide guidance can help enable more device-based authentication in a standardized way, using globally known authentication standards such as FIDO alongside widely accepted authentication rails such as EMV 3DS.

Read the full white paper on the EMVCo website to learn more.


Oasis Open Projects

The Importance of Open Standards for Data Interoperability


By Francis Beland, Executive Director, OASIS Open

The use of open standards in data interoperability is crucial for enhancing governance not only in the European Union but globally. Open standards determine the format, storage, and exchange of data and enable different organizations and systems to communicate seamlessly. This is especially vital for the EU, with its diverse member states and institutions, where open standards ensure free and secure data flow across borders, enabling better coordination and cooperation in implementing healthcare, trade, environmental protection, and security policies.

Furthermore, open standards uphold the principles of transparency and democracy, enabling citizens’ access to governmental data and enhancing public accountability, thereby promoting civic engagement. From an economic standpoint, open standards foster innovation, facilitate cross-border business operations and drive economic growth. Moreover, they help address global challenges such as climate change and pandemics, allowing effective data sharing and collaboration among nations.

OASIS Open interoperability standards are pivotal in ensuring data protection, privacy, and security while harmonizing technological infrastructures. Our standards are vital for the EU and other governments to fully leverage data interoperability’s benefits in an increasingly interconnected world.

The post The Importance of Open Standards for Data Interoperability appeared first on OASIS Open.


Identity At The Center - Podcast


We’ve got another great episode of the Identity at the Center podcast for you! We caught up with Eve Maler of Venn Factory to answer a few listener voicemail questions and to see if her thoughts on the difference between digital identity and identity and access management has changed since we last asked her almost two years ago.

Episode #262 is available now at idacpodcast.com and in your favorite podcast app.

#iam #podcast #idac


The Engine Room

Welcoming Dalia Othman as Co-Executive Director


Dalia Othman has been selected by our Board as The Engine Room’s other Co-Executive Director, to lead the organisation alongside Paola Mosso from mid-March.

The post Welcoming Dalia Othman as Co-Executive Director appeared first on The Engine Room.

Wednesday, 04. October 2023

decentralized-id.com

Ecosystem Overview

This page includes a breakdown of the Web Standards, Protocols, Open Source Projects, Organizations, Companies, Regions, Government and Policy surrounding Verifiable Credentials and Self Sovereign Identity.

Note to reader: This is a Work in Progress, and should not be taken as authoritative or comprehensive. Internal links are shown in italic.

Open Standards
- Decentralized Identifiers: Explainer Literature DID Methods Supporting Tech DIDAuth Critique
- Verifiable Credentials: Explainer Comparisons Varieties Data Integrity JSON-LD LD-Proof (w3c) JSON-LD ZKP BBS+ (w3c) JOSE / COSE JSON SD-JWT (ietf) JWP (ietf) ZKP-CL (Hyperledger) Related JSON-LD (W3C) JSON (IETF) BBS (SIAM 1986)
- Exchange Protocols: DIDComm (DIF) CHAPI (DIF) OIDC4VC (OpenID) mDL (ISO/IEC) WACI-Pex (DIF) VC-HTTP-API (CCG)
- Authorization Protocols: zCap (w3c) UCAN (Fission, Bluesky, Protocol Labs) GNAP (IETF) OAuth (IETF)
- ISO Standards: mDL (ISO/IEC 18013-5) JTC 1/SC 17/WG 3 - Travel Documents (ISO/IEC) ISO 27001
- Data Stores: Encrypted Data Vaults - EDV (DIF) Decentralized Web Node - DWN (DIF)
- Trust Frameworks: 800-63-3 (NIST) PCTF (DIACC)
- Non SSI Identity Standards: OpenID (OpenID) FIDO (FIDO) OAuth (IETF) SCIM (IETF) SAML (OASIS) KMIP (OASIS) WebAuthN (W3C) Secure QR Code (OASIS)
- Blockchain Standards: ISOTC 307 (ISO) CEN/CLC/JTC 19 (CEN/EENTLIC) ERC-EIP (Ethereum)
Code-Bases
- Open Source Projects: Universal Resolver (DIF) KERI (DIF) Other Tools & Libraries (DIF) ESSIF-Lab (ESSIF-Lab) Aries (Hyperledger) Indy (Hyperledger) Ursa (Hyperledger) Other Tools & Libraries (Hyperledger) Blockcerts (Hyland)
- Company Code: Walt.id Verite SpruceID
Organizations
- International Standard Development Organizations [SDO]: W3C IETF OASIS ITU-T ISO/IEC
- National Government/Standard Setting Bodies: NIST The Standards Council of Canada BSI - The Federal Office for Information Security, Germany
- Community Organizations: W3C - CCG DIF ToIP ADIA Kantara MyData DIACC ID2020 OpenID Foundation Internet Safety Labs GLEIF Hyperledger Foundation FIDO Alliance OASIS
SSI Networks: DizmeID Sovrin BedRock ONT Velocity GlobalID Dock ITN, Mobi
Companies
- Microsoft - Azure / Entra
- EU SSI Startups: MyDex MeeCo ValidatedID Bloqzone Procivis Gataca
- US SSI Startups: Dock Anonoyome GlobalID Hyland Magic IDRamp Indicio Verified Inc (formerly UNUMID) Animo Mattr Liquid Avatar Hedera IOTA Trinsic Transmute Spruce Disco.xyz
- Asia SSI Startups: Affinidi ZADA Dhiway Ayanworks NewLogic
- Africa SSI Startups: FlexID Diwala
- Acquisitions: Avast-Evernym-SecureKey
- Analyst Firms: KuppingerCole Forrester Gartner
- Consulting Firms: Deloitte Accenture McKinsey BCG
- IAM Industry: Ping (TomaBravo rollup) Okta Auth0 ForgeRock (TomaBravo rollup) IDENTOS SailPoint (TomaBravo rollup)
Policies/Regulations (by region): FATF Europe Data Governance Act GDPR eIDAS1 eIDAS2 UK Data Protection Asia USA COPPA Privacy Act California SB786 India Canada Pan Canadian Trust Framework (PCTF)
Government Initiatives: US SVIP National Cybersecurity Strategy Germany IDUnion UK Scotland UK Digital Strategy EU eIDAS2 Large Scale Pilots Architecture and Reference Framework EBSI ESSIF-Lab Catalonia Switzerland APAC New Zealand Australia Singapore South Korea Canada BCGov Alberta Ontario LatAm LACCHAIN
Real-World Implementation: Government Issued ID Passport eMRTD/DTC (ICAO) Immigration (USCIS) mDL (US AAMVA) [not SSI standards conformant] IDCard (IATA / Switzerland) Trust Registries & Directories TRAIN (ToIP) Regi-Trust (UNDP) OrgBook BC (BCGov) SupplyChain/Trade GS1 GLEIF Banking Bonifi COVID NYState VCI CCI DTCC DIVOC Enterprise Healthcare Learning/Career/Education Jobs for the Future Velocity Network Learning Economy Foundation TLN - Trusted Learner Network KYC Real Estate Rental Travel Humanitarian Energy IoT Guardianship
Wallets: Types (by type+topic)
Research Papers/Academic Literature: Turing Institute Research: Privacy & Trust
Events: IIW RWoT
Topics: Biometrics Privacy Human Rights User Experience Business Critiques Future
Web3, DWeb, & Other Tech (by focus)
- Web3: Web3 and SSI DAO Decentralization Metaverse NFT SBT DeFi
- Organizations: Ethereum Enterprise Alliance* Fission Protocol Labs
- DWeb: Secure Scuttlebutt Bluesky Web5 Handshake
- Blockchain Ecosystems: Bitcoin Ethereum

Friday, 23. February 2024

FIDO Alliance

Cybersecurity Policy Forum: Identity, Authentication and the Road Ahead


2023 demonstrated that we still have a lot of work to do when it comes to protecting Americans from identity theft and identity-related cybercrime. The GAO and FinCEN together documented more than $300 billion in identity-related cybercrime, DHS’ Cyber Safety Review Board (CSRB) outlined how weaknesses in legacy authentication tools enabled adversaries to launch a wave of high-profile attacks, and millions of Americans struggled to recover from identity theft. Meanwhile, the introduction of new tools powered by biometrics and AI to help block attacks also raised concerns about equity and bias, and in the physical world, many Americans still struggle to get foundational credentials that they need to prove who they are. As 2024 kicks off, these issues will all continue to be front and center.  

On Thursday, January 25th in Washington DC, the Better Identity Coalition, FIDO Alliance, and the Identity Theft Resource Center (ITRC) joined forces to present a full-day policy forum looking at “Identity, Authentication and the Road Ahead.”


Security Journal: Fingerprints agrees distribution partnership with Ansal Component


Fingerprints’ biometric access solution is designed for physical and logical access devices and applications such as smart locks, FIDO tokens, crypto wallets and more.


FinExtra: Mitigating fraud risk: effective strategies for small financial institutions


Passwords are one of the most common targets for fraudsters. Strengthening password security demands robust authentication methods, risk-based measures and behavioural analysis to detect anomalies. Active exploration of innovations like Passwordless Login, based on the robust Fast Identity Online 2 (FIDO2) standards developed by the FIDO Alliance, is essential to bolster online security and authentication. 
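For readers unfamiliar with what FIDO2 passwordless login involves on the client side, the sketch below shows a passkey registration call using the standard WebAuthn browser API; the relying party, user details, and challenge are placeholders, and the server-side option generation and verification steps are omitted.

```typescript
// Minimal browser-side sketch of FIDO2/WebAuthn passkey registration. The relying party,
// user details, and challenge below are placeholders; in practice the options come from,
// and the resulting credential is verified by, the relying party's server.
async function registerPasskey(challenge: Uint8Array) {
  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "bank.example", name: "Example Bank" },       // placeholder relying party
      user: {
        id: new TextEncoder().encode("user-1234"),            // placeholder user handle
        name: "jane@example.com",
        displayName: "Jane Doe",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],    // ES256
      authenticatorSelection: {
        residentKey: "required",        // discoverable credential, i.e. a passkey
        userVerification: "required",   // PIN or biometric on the device
      },
    },
  })) as PublicKeyCredential;

  // Send credential.rawId and the attestation response to the server for verification.
  return credential;
}
```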


Engadget: PlayStation now supports passkey sign-ins


Sony Interactive Entertainment (SIE) introduces passkey support for PlayStation accounts, allowing users to log in via their mobile device or computer’s screen unlocking method like PIN, fingerprint, or facial recognition. Passkeys enhance security by preventing reuse or sharing, reducing vulnerability to phishing and data breaches.


The Verge: Now you can sign into your PlayStation account without a password


Sony PlayStation has introduced passkey support for account logins, enabling users to authenticate without passwords. Similar to Nintendo’s implementation, users can now use authentication methods like iOS Face ID or Android fingerprint sensors for account access.


Ceramic Network

Points: How Reputation & Tokens Collide

Points are here and they signal how networks and apps will evolve next with verifiable data.

Points have taken Web3 by storm in the last six months, catalyzed by projects like Blur and EigenLayer rewarding users with points on the way to seizing the NFT market and amassing $7 Billion TVL respectively. More than 115 Billion points have been given out by Web3 projects so far, according to Tim Copeland at The Block.

There are two ways to look at points:

1. As a precursor to an airdrop. Projects use points ahead of a token to generate interest, signal what they care about and will reward, more effectively target engagement, and navigate legal risks associated with tokens.
2. As a measure of quantifiable reputation. Points ascribe a value to user activity, just like many reputation systems have before: traditional loyalty programs, Reddit karma, check-ins, credentials. They can signal legitimacy in pseudo-anonymous systems and — because they’re more quantitative than, for example, verifiable credentials — standing within the community.

Both of these are right. Points align the incentives of the platform and the user base, like all reputation systems. And they forecast who is creating value and is likely to be rewarded. By understanding how these two intersect, we can forecast where Web3 will go far beyond today’s points craze.

Points are quantifiable like money, enduring like reputation

Tokens were the first major innovation of Web3 and the primary incentive. They offer fully quantifiable value, are transactional, and require no additional context. They work as “one time games.” Reputation is how social systems achieve repeat game use cases, rewarding ‘good behavior’ of an actor (e.g. following contracts and policies, not cheating counterparties) with access and benefits over the long term. Reputations are non-fungible — for them to establish trust, it has to be hard to buy reputation.

Points are proof of activity that act as a building block for reputation (and in Web3, often carry the suggestion of future value). All reputations come with some benefit. Usually, they’re more subtle than financial rewards. Reputations might gain access to a service (credit score) or club (referral), earn discounts (loyalty programs) or introductions (dating), convince counterparties to transact (Uber rating, credit lending), and build trust with customers (influencers, corporate brands). Reputations are less measurable than financial assets, but often more valuable.

For Web3 to grow into social and other non-financial use cases, more robust reputations are needed. Points are not an isolated mechanism to forecast token earnings — they are one point on a broad spectrum of token (financial) and reputational rewards that Web3 will keep innovating on.

Points as part of the evolution of Web3 reputation

Reputation naturally starts in the most discrete, high-value places and evolves to more broad ones.

1. B2C “Badges”

The earliest forms of reputation helped networks solve discrete pressing problems: anti-sybil and KYC. This involved a business or service issuing badges to users for achieving important milestones. For example, Gitcoin Passport stamps prevent sybil attacks in Grants rounds for Ethereum and other ecosystems. The meaning of the badge is objective and clearly denominated.

2. Attestations

After discrete badges, platforms needed reputation for a wider variety of activities and credentials: history of contribution, trust as a delegate, skills in a DAO, proof of activity. Attestations are still clearly denominated, but rather than signifying a clear milestone like badges, they’re more continuous.

This also started B2C (for example, participating in an event, completing a certain step in a user flow, etc.). EAS has emerged as a standard for issuing these, used natively in the Optimism stack and widely across Ethereum. Increasingly, attestations are community-driven as well: open platforms let users create and verify claims. For example, MetaMask is working on a review and reputation system for Snaps.
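
To make the shape of an attestation concrete, here is a minimal TypeScript sketch. It is illustrative only: the field names and helper types are generic assumptions invented for this example, not drawn from EAS or any other specific standard.

```typescript
// Illustrative only: a generic shape for an off-chain attestation.
// Field names are hypothetical and not taken from any real schema registry.
interface Attestation {
  schema: string;                // identifier of the claim schema being attested to
  attester: string;              // DID or address of the party making the claim
  recipient: string;             // DID or address the claim is about
  data: Record<string, unknown>; // the claim payload itself
  issuedAt: number;              // unix timestamp, giving the claim provable time
  signature: string;             // attester's signature over the fields above
}

// Example: a community member attesting that a user completed an onboarding step.
const exampleAttestation: Attestation = {
  schema: "completed-onboarding-step",
  attester: "did:example:dao-moderator",
  recipient: "did:example:alice",
  data: { step: "profile-setup" },
  issuedAt: 1708300800,
  signature: "0x…", // placeholder
};
```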

3. Points: scored activity

On-chain transactions, B2C badges, and attestations are all cryptographically verifiable actions. Users do something that has provable provenance and time, whether that’s on-chain or off-chain signed data.

Point systems specify which of these activities have value in their system (and how much), tabulate them over time for each identity, and ascribe a numeric ‘point’ value to them. Points create a quantifiable reputation that continuously aggregates the previous forms.
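
As a rough illustration of that tabulation step, the sketch below scores a stream of verifiable activities against a weights table and keeps a running total per identity. The activity names, weights, and types are assumptions invented for the example, not taken from any real point system.

```typescript
// A minimal sketch of a point system: assign weights to activity types,
// then tabulate a running score per identity. All names and weights are hypothetical.
type Activity = { identity: string; kind: string; timestamp: number };

const weights: Record<string, number> = {
  "swap": 10,
  "provide-liquidity": 50,
  "refer-user": 25,
};

function tabulatePoints(activities: Activity[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const a of activities) {
    const value = weights[a.kind] ?? 0; // activities the system doesn't value score zero
    totals.set(a.identity, (totals.get(a.identity) ?? 0) + value);
  }
  return totals;
}

// Usage: two identities with different activity histories.
const scores = tabulatePoints([
  { identity: "did:example:alice", kind: "swap", timestamp: 1 },
  { identity: "did:example:alice", kind: "refer-user", timestamp: 2 },
  { identity: "did:example:bob", kind: "provide-liquidity", timestamp: 3 },
]);
console.log(scores); // Map { 'did:example:alice' => 35, 'did:example:bob' => 50 }
```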

4. Events: all activity

There’s no reason to believe the evolution will stop with points. At scale, all of a user’s activities in Web3 apps and platforms will be recorded as cryptographically verifiable events. This might be likes on a cast, messages on a forum, contributions to a codebase, visits to a Blackbird restaurant, etc.

These are all events, and they all have value — but it’s not always known up-front what that value is. Some might have value in driving community engagement, others in improved analytics for products or targeting for ads, some as inputs into reputation systems. Because all will be cryptographically recorded, they can be referenced any time in the future.

Points will dominate for now, but before long we’ll see a huge increase in retroactive airdrops, activations, rewards, access, and other forms of value awarded to users based on a much broader history of their events — not just those that are made explicit up-front via point systems.

Infrastructure for points, events, and trust

All of these forms of reputation serve to reward users. Web2 used points extensively, but Web3 can uniquely do so openly and with composability. By making every event driving points both transparent and verifiable, events and points can be leveraged across platforms and have trust built in. This trust can reinforce future rewards, encourage more activity, and enable cross-platform point innovation.

Unfortunately, while points are proliferating, to date most haven’t tapped into this unique Web3 possibility — most have been tabulated on centralized databases, putting rewards at risk.

Data ledgers vs. asset ledgers

Financial blockchains were built to be asset ledgers, not point or event ledgers. They’re designed for scarcity; e.g., they must protect against double-spend as a core principle. Points are not bought, sold or traded like assets — they’re earned. They’re best served — quickly, cheaply, scalably — on a data ledger.

Data ledgers, for data that is not scarce, operate with different principles. They must still offer strong verifiability and composability; but they don’t have to protect against double spend, and they must scale many orders of magnitude beyond asset ledgers. There are exponentially more data transactions than asset transactions in any web service.

Ceramic is the first decentralized data ledger with the scale, composability, and trust guarantees required to be a system of record for points and events. It’s built to enable the scaling of Web3 beyond financial transactions to richer experiences, including those powered by attestations, points, and the billions of events that are coming to enable a data-rich Web3.

Building with Points

If you’re thinking about a point system for your product, or how to advance point-enabled experiences, please reach out to partners@3box.io.

If you are interested in emerging standards for points, reach out to us on the Ceramic discord to learn more about our working group.

If you’ll be at EthDenver next week, come talk points, reputation and verifiable data with us at Proof of Data.

Thursday, 22. February 2024

The Engine Room

Community call: Dreams of a collective infrastructure for information ecosystems in Latin America 

Join our next community call to talk about the kinds of infrastructures we need to collectively create a better flow of creation, distribution and reception of information in Latin America. 

The post Community call: Dreams of a collective infrastructure for information ecosystems in Latin America  appeared first on The Engine Room.


Ceramic Network

ETHDenver 2024: Where to find Ceramic in Denver

The core Ceramic team is coming to ETHDenver 2024! Check out all of the events where you can meet the team.

ETHDenver 2024 is kicking off, and we are very excited to meet you all there. This year, you will find the Ceramic team at a number of side events, talks, and workshops, and, most importantly, at Proof of Data, an event co-organized by Ceramic and Tableland that you don’t want to miss.

Collect attendance points at ETHDenver 2024!

Are you ready for an exciting scavenger hunt at ETHDenver 2024?

Ceramic is partnering with Fluence, a decentralized computing marketplace, to create a fun and interactive game for collecting attendance badges at the majority of the events listed below. Those badges will allow you to collect points throughout ETHDenver 2024.

You can find us at each event and tap a disc to participate! With each attendance, you will claim points represented as documents on Ceramic. Fluence will be consuming the new Ceramic Data Feed API to enable compute over incoming points.

Rumor has it that the first participants to collect all the necessary points will be rewarded with some really cool prizes! So make sure to participate and we can’t wait to see you at all of the ETHDenver events listed below!

Sunday, February 25th - Saturday, March 2nd Silk ETHDenver hackerhouse

Ceramic is partnering with Silk and other ecosystem partners to invite hackers to work together on building better scientific tooling, web account UX, governance forums, and much more.

🚀 Calling all hackers! Unveiling the Silk ETH Denver Hacker House – where innovation meets decentralized tech! 🏡 Join our quest to revolutionize scientific tooling, web account UX, governance forums, and much more!

Are you ready? Save the dates: Feb 25th - March 2nd 🤍🧵

— Silk (@silkysignon) January 26, 2024
Tuesday, February 27th DePIN Day

Join us and our friends at Fluence for a day filled with talks, workshops, and discussions on all things #DePIN.

Location:
Green Spaces
2950 Walnut St.
Denver, CO

Time:
13:00 - 17:00 MST

Wednesday, February 28th Open Data Day

Our co-founder, Danny Zuckerman, will deliver a keynote at Open Data Day hosted by Chainbase. Come hear more about Ceramic and what we have coming up on the roadmap.

Location:
1261 Delaware St,
Denver, CO

Time:
13:00 - 17:00 MST

SciOS

Don’t miss the workshop led by Radek, our Developer Advocate, at SciOS. The workshop will focus on the barriers and workarounds for enabling developers to build interoperable DeSci solutions.

Location:
2601 Walnut St 80205,
Denver, CO

Time:
13:00 - 16:00 MST

Thursday, February 29th libp2p day

Come listen to our core engineers discussing the implementation of Recon, a new Ceramic networking protocol that improves network scalability and data syncing efficiency.

Location:
The Slate Denver,
Denver, CO

Time:
13:30 - 18:00 MST

Friday, March 1st Proof of Data

Join Ceramic x Tableland on March 1 in Denver for the Proof of Data Summit, a full-day community gathering on reputation, identity, DePIN, decentralized AI, and decentralized data computing. Featuring lightning talks, technical discussions, and panels with industry visionaries, this is an event you won’t want to miss. RSVP now to secure your spot in person or via livestream.

Location:
Denver Art Museum
Denver, CO

Time:
9:00 - 16:00 MST

ETHDenver - Triton Stage

Ceramic engineer Golda Velez will lead us on a talk about decentralized trust and AI on the Triton Stage at the Spork Castle!

Location:
Spork Castle
Denver, CO

Time:
14:45 MST

Keep an eye on the updates and our Twitter as we get closer to the events. We can’t wait to see you there!


DIF Blog

Guest blog: Tim Boeckmann, Mailchain

Mailchain, founded in 2021, aims to revolutionize decentralized identity and communication with its services, including Vidos and the Mailchain Communication Protocol. These offerings streamline management of, and interaction with, decentralized identifiers (DIDs), ensuring secure, efficient, and compliant operations across various industries, simplifying the integration and adoption of decentralized identity technologies.

What is Mailchain, and how did it come into existence? 

I always had a passion for startups. In 2016 I joined the team at AWS (Amazon Web Services) that helps startups with technology and go-to-market strategy, for example by introducing them to other AWS customers. 

The blockchain landscape was evolving at the time and my soon-to-be co-founders (who I met at AWS) and I started tracking the space closely. We noticed that it wasn’t possible to communicate privately between blockchain addresses without providing an additional piece of information, like an email address. So we sent some encrypted messages with the transaction data as an experiment. 

This grew into a side project. It was open source and had quite a few contributors, but we realized we needed something more scalable that wasn't dependent on blockchain protocols, with the associated gas fees and speed constraints. 

So, in 2021 we set out to build Mailchain, a protocol that enables people to communicate privately using any blockchain address. 

With our SDK, developers can easily add web3 email to their own projects and applications, allowing them to engage with their users in a truly web3 way.

It’s an interesting strategy to focus on upgrading email with web3 capability. Why did you choose this route? 

There are over 3.9 billion active email users today. Each user’s inbox paints a rich picture of who they are. It stores their online actions, communication habits, spending behavior, even their thoughts, ideas, and feelings.  And everybody wants to keep that information private.

Web3, on the other hand, is underpinned by the principles of decentralization, privacy and digital property rights, using wallets and blockchain addresses as identities. But there’s no native way to communicate privately using these addresses. The workaround is to use another communication channel, whether that’s email, instant messaging or social media.

With Mailchain, users enjoy the privacy and other benefits of a digital identity wallet without needing to leave their email inbox. For instance, people can authenticate with a Web3 application by clicking a link in their inbox. Upon clicking the link, the system creates a self-signed Verifiable Credential (VC). The app knows who should be signing it, and is able to verify the user. 
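
As a sketch of what such a self-signed credential might look like, here is a minimal object following the W3C Verifiable Credentials data model, where the issuer and the subject are the same DID. The DIDs, the second credential type, the claim fields, and the proof value are placeholders; Mailchain’s actual payload may differ.

```typescript
// A minimal self-signed Verifiable Credential, sketched as a TypeScript object.
// Issuer and subject are the same DID because the user is vouching for themselves;
// the verifying app checks the proof against the DID it expects to see.
// All identifiers, claim fields, and the proof value below are placeholders.
const selfSignedVC = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AuthenticationCredential"], // second type is hypothetical
  issuer: "did:example:user123",
  issuanceDate: "2024-02-20T12:00:00Z",
  credentialSubject: {
    id: "did:example:user123",        // same DID as the issuer: self-signed
    sessionRequest: "join-call-8421", // hypothetical claim about what is being authorized
  },
  proof: {
    type: "Ed25519Signature2020",
    verificationMethod: "did:example:user123#key-1",
    proofValue: "z3FXQ…",             // placeholder signature value
  },
};
```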

This use case came from a customer who needed to prevent Zoom-bombing (unauthorized intrusion into a private video conference call). Another use-case is universities selling remote courses. They don’t want people who are not enrolled joining the sessions, or others joining on behalf of those who are enrolled — particularly when it comes to exams. 

How did decentralized identity become part of the Mailchain story? 

We wanted to enable the community, so we open-sourced as much of the codebase as we could. 

We started to see people using Mailchain for authentication, and realized identity was vital to what they were trying to achieve. These developers needed tools to manage user identities. It was early in the adoption cycle and there were a lot of gaps. 

We also started hearing people talking about DIDs (Decentralized Identifiers) and VCs (Verifiable Credentials).  We saw a pattern between VCs and our previous work with NFTs. So, we went deep into the W3C standards and looked at how they were being used in the real world. 

At the time, we didn’t know if we wanted to put people’s Mailchain IDs on-chain. We were looking for a standard way to construct the IDs and convey related attributes, such as the authorized senders for a blockchain address.  

Over time, we saw an opportunity to converge on a standardized approach. We also wanted to extend what we built to help other developers in the ecosystem, so we created Vidos, a suite of managed services to help people building with DIDs and VC related applications. 

Tell us more about the tools you’re building, and how they promote adoption and interoperability of decentralized identities

Our first service is the Vidos Universal Resolver. DID resolution forms a core part of any interaction with VCs and needs to be reliable and performant. It’s also something that developers and ops teams shouldn’t need to spend time deploying and managing. The service takes away this burden so deploying a resolver is simple and just requires adding the API details to an application.
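
To show how little integration that typically requires, here is a hedged sketch of calling a hosted resolver over HTTP. It follows the common universal-resolver pattern of GET /1.0/identifiers/{did}; the base URL and API-key header are assumptions standing in for whatever connection details the service actually provides.

```typescript
// Sketch: resolving a DID document via a hosted resolver's HTTP API.
// The endpoint path follows the common universal-resolver convention;
// the base URL and auth header below are hypothetical placeholders.
async function resolveDid(did: string): Promise<unknown> {
  const baseUrl = "https://resolver.example.com"; // replace with your resolver instance URL
  const response = await fetch(`${baseUrl}/1.0/identifiers/${encodeURIComponent(did)}`, {
    headers: { "x-api-key": "YOUR_API_KEY" }, // hypothetical auth header
  });
  if (!response.ok) {
    throw new Error(`DID resolution failed: ${response.status}`);
  }
  return response.json(); // typically a resolution result containing the DID document
}

// Usage:
// const result = await resolveDid("did:web:example.com");
```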

The service comes with uptime guarantees and offers granular policy and permissions features, helping organizations meet their own availability and compliance requirements. 

This helps organizations who are not just issuers (of credentials such as course certificates and educational qualifications). They may also need to verify other credentials (such as proof of identity, age, etc.), which potentially involves resolving DIDs on multiple networks and services. 

We also have other services coming later in the year that will facilitate credential verification with similar compliance and logging features. 

You mentioned go-to-market strategy as an area of personal interest. Can you tell us a bit about your own strategy? 

The DID resolution and Mailchain audiences are different. For Vidos, we’re working with enterprises and closing some gaps where technology is not available today. Mailchain is largely feature complete. 

Vidos is a good fit with Mailchain because there’s strong interest in enabling Web3 communication, whether that’s machine-to-machine messages triggered by blockchain transactions or certain types of business communication. 

We need to ground this in the real world, so developing SaaS (Software as a Service) products to move the entire ecosystem forward is what we think is most important right now. 

I’d like to think that building on W3C standards ensures we don’t get ruled out of any geographic  markets. The DID resolver is intended to be multi-region. Customers can already deploy into the UK and EU. We will stand up services elsewhere, as needed. 

What market signals do you see? 

The market never moves fast enough for an entrepreneur! But we’re seeing strong signs. It’s becoming a priority for enterprises to see how they can move beyond identity federation.  Regulatory change and fraud are also encouraging supply chain actors and financial institutions to look at how they can use decentralized identity. 

We’re seeing this pop up in different places; for example, it’s good to see LinkedIn verifying humans on the platform. There are certainly tailwinds. 

What is the value of DIF membership to Mailchain? 

We’re hoping to collaborate with industry participants, to make sure what we build is right for the use cases we’re targeting, starting with the Vidos Universal Resolver for DIDs, as well as to learn from others building in the space. 

We also want to contribute back to what’s a very useful and sensible set of standards, whether that’s ideas in the working groups and/or contributing packages or libraries.

It’s a great time to be involved in DIF. The standards are reaching a stage where they are mature enough. The opportunity is now!


MyData

MyData4Children ZINE 2024: A challenge for MyData Community – Design a Robot School

Introducing MyData4Children Zine 2024 Numerous studies and real-life events have shown us that emergent technologies affect children, for good and bad. However, the dominant narrative is framed with an individualistic focus, putting a single child or a person in a child’s circle of trust on the spot, leaving many of us feeling defeated, nervous, and […]

Wednesday, 21. February 2024

OpenID

OpenID Summit Tokyo 2024 and Celebrating 10 Years of OpenID Connect

OpenID Foundation Japan (OIDF-J) hosted the OpenID Summit Tokyo 2024 in Shibuya Tokyo on Friday, January 19, 2024 with over 250 in attendance. The OpenID Foundation (OIDF) was thrilled to be a part of the Summit that included contributors from Japan and abroad presenting on current digital identity, security, and digital wallet topics.

Gail Hodges, OIDF Executive Director, kicked the Summit off by presenting OIDF’s strategic outlook for 2024 as well as a detailed briefing on the Sustainable Interoperable Digital Identity (SIDI) Summit held in Paris in November 2023.

A highlight of the Summit was a panel discussion celebrating ten years of OpenID Connect. This panel was coordinated and moderated by longtime OIDF board member and OpenID Connect editor, Mike Jones. Panelists included OIDF Chairman, Nat Sakimura, longtime Connect contributor and evangelist, Nov Matake, and Ryo Ito, OIDF-J Evangelist. As Mike Jones noted in his blog, the panelists shared their experiences on what led to OpenID Connect, why it’s been successful, and lessons learned along the way. This was the first of three planned OpenID Connect celebrations in 2024 with the other two taking place at Identiverse in May and the European Identity and Cloud Conference in June.

Nat Sakimura concluded the OpenID Summit Tokyo 2024 by delivering the closing keynote.

The post OpenID Summit Tokyo 2024 and Celebrating 10 Years of OpenID Connect first appeared on OpenID Foundation.


Identity At The Center - Podcast

In our latest episode of the Identity at the Center podcast,

In our latest episode of the Identity at the Center podcast, we had the pleasure of welcoming Sara King and Raul Cepeda from rf IDEAS for a Sponsor Spotlight discussion.

This episode, generously sponsored by rf IDEAS, dives deep into the realms of physical security and identity, highlighting the innovative solutions rf IDEAS brings to the table. We explored their unique market positioning, their impactful presence in sectors like healthcare and manufacturing, and how they're leading the charge towards passwordless environments. Our conversation also touched on current industry trends, including the move to secure mobile credentials and the future of biometrics, capped off with insights into rf IDEAS' Reader Remote Management capabilities.

Tune in to this engaging episode to discover how rf IDEAS is bridging the gap between physical and logical security for a seamless authentication experience. It's an insightful discussion on the latest advancements in the field that you won't want to miss.

#iam #podcast #idac


DIDAS

DIDAS Statement for E-ID Technology Discussion Paper

In this latest contribution to the ongoing dialogue surrounding Switzerland’s E-ID initiative, DIDAS has released a comprehensive document that critically evaluates the current technological proposals for the Swiss trust infrastructure. This document underscores DIDAS’s commitment to a principle-based, collaborative methodology in developing a secure, adaptive E-ID ecosystem, echoing the necessity for an approach that is both inclusive and forward-thinking.

It focuses on the technological shortcomings of the existing scenarios and proposes an ‘A+’ scenario that better aligns with EU standards, addresses aspects of privacy (specifically unlinkability and correlation) and fosters iterative development. This approach champions not only secure cryptographic practices but also advocates for the coexistence of various credential types, ensuring a flexible, future-proof infrastructure.

A further aspect is the imperative of cryptographically safe owner binding, a cornerstone for qualified digital identities. The document elucidates the necessity for cryptographic primitives embedded directly within the secure elements of devices, particularly for high levels of assurance. This technical requirement is not merely a suggestion but a mandatory prerequisite to prevent potential misuse or impersonation attempts. Confining private keys to a device’s silicon is highlighted as a critical measure to prevent their unauthorized replication, ensuring that the sanctity of digital identities remains inviolable.

Furthermore, the document highlights the urgency of action, urging stakeholders to lead the way in establishing a continuously evolving, privacy-centric E-ID framework. It is also aimed at striking a balance between Swiss-specific requirements and EU interoperability, setting a precedent for digital identity management.

DIDAS’s insights into governance structures and the collaborative design of the trust infrastructure serve as a high-level guide for policymakers, technologists, and industry stakeholders, emphasizing the collective responsibility in shaping a digital identity ecosystem that is secure, user-centric, adaptable by private sector businesses and aligned with broader societal values and international standards.

Download here: 2024-02 DIDAS E-ID Technology Discussion Paper Response Final

Next Level Supply Chain Podcast with GS1

Tackling Inventory Headaches in the E-commerce Universe

Tracking and managing inventory from end-to-end is challenging for business merchants dealing with perishable food items.

Lichen Zhang, co-founder of Freshly Commerce, is changing how merchants handle the complexities of tracking bundles, managing perishable inventory, and complying with regulations such as FSMA 204. The company’s early success with inventory and order fulfillment solutions for e-commerce spurred Freshly’s evolution into a suite of tools helping e-commerce merchants manage their inventory and order fulfillment. 

The episode provides valuable insights into the evolution of e-commerce and the vital role of innovative solutions like Freshly Commerce in meeting the industry’s changing needs. Explore how Freshly Commerce addresses challenges faced by merchants, the importance of data sharing and collaboration, and the positive impact of technological advancements on adapting to new conditions. Lichen also emphasizes the role of education and customer-centric strategies in Freshly Commerce’s ongoing development, and explains how the company started after identifying a need in the market, which led the team to enter a Shopify app challenge where they secured third place.

 

Key takeaways: 

Accurate and timely inventory management in e-commerce is complex but necessary.

Addressing food safety and compliance with regulations helps prevent food waste and maximize profits.

Embracing technological advancements such as AI-driven tools can positively change some aspects of business operations.

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1 US on LinkedIn

 

Connect with guest:

Lichen Zhang on LinkedIn

Check out Freshly Commerce

 


Digital Identity NZ

Biometrics… ‘Perfect is the enemy of good?’ | February 2024 Newsletter

Kia ora e te whānau

Biometrics hit the news again earlier this month. TV1’s 7 Sharp and 1News, together with RNZ news online and several print media, carried the story of Foodstuffs North Island’s trial of Facial Recognition in 25 of its stores to see if it reduces retail crime. In addition, Māori, Pasifika and people of colour have concerns about bias. Naturally the Office of the Privacy Commissioner (OPC) is closely monitoring the trial. I have no special insight, but from the links above I deduce that the trial stores run their CCTV feed through facial image matching software set at a high 90% threshold, matching it against that particular store’s database of known and convicted offenders. If a possible match is made, specially trained ‘super recognisers’ visually inspect both the enrolled and detected images, which in itself should help eliminate racial bias, while the rest of the feed is deleted.
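
For readers unfamiliar with how such a threshold works in practice, the sketch below shows the basic decision logic: compare a face embedding from the live feed against each enrolled embedding and flag a candidate only when the similarity clears the configured threshold, leaving the final call to a human reviewer. This is purely illustrative and is not based on the software used in the trial.

```typescript
// Illustrative only: thresholded face matching against an enrolment database.
// Real systems use model-specific embeddings and calibrated thresholds;
// nothing here reflects any actual deployment.
type Enrolled = { personId: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Returns the best candidate whose similarity clears the threshold, or null;
// a trained human reviewer would still confirm before any action is taken.
function findCandidate(probe: number[], db: Enrolled[], threshold = 0.9): Enrolled | null {
  let best: Enrolled | null = null;
  let bestScore = threshold;
  for (const entry of db) {
    const score = cosineSimilarity(probe, entry.embedding);
    if (score >= bestScore) {
      best = entry;
      bestScore = score;
    }
  }
  return best;
}
```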

Permanent deletion is not straightforward, and the ‘no sharing’ rule between stores is another matter the OPC will likely monitor, along with the trial’s effectiveness at reducing retail crime. While emerging anecdotal evidence overseas suggests the approach can be effective, direct comparative research is needed.

CCTV and facial recognition are already widely used for crime detection in public places. We all use facial recognition every day on our phones, when we cross the border using Smart Gate, or when we use a browser on our PC, so you might ask: why all the fuss? 

There are large notices in-store, it’s private property, and people can choose to shop elsewhere. The additional use of image-matching software in stores improves on matching processes traditionally done by humans, which carry the potential for human error. FR software and camera quality continuously improve, while human-based matching has inherent limitations. Perfection is challenging, but by combining human and technological efforts we can improve outcomes.

Foodstuffs North Island’s adherence to its rules raises the question of whether striving for perfection impedes progress. DINZ’s Biometrics Special Interest Group reflects on differing community views, agrees with the Deputy Police Commissioner on the need for an open discussion and emphasises the need for education on the technology’s workings and potential benefits when implemented correctly.

Help us provide much needed education and understanding in this domain.

Ngā mihi nui

Colin Wallis

DINZ Executive Director

Read the full news here: Biometrics… ‘Perfect is the enemy of good?’ | February 2024 Newsletter

SUBSCRIBE FOR MORE

The post Biometrics… ‘Perfect is the enemy of good?’ | February 2024 Newsletter appeared first on Digital Identity New Zealand.


FIDO Alliance

FIDO Alliance Announces Call for Speakers and Sponsors for FIDO APAC Summit 2024

February 21, 2024

The FIDO Alliance is excited to announce the return of the FIDO APAC Summit for its second year, building on the success of the 2023 event in Vietnam. Scheduled to take place at the JW Marriott Kuala Lumpur, Malaysia, from September 10th to 11th, this premier event in the APAC region is dedicated to advancing phishing-resistant FIDO authentication – focusing on FIDO-based sign-ins with passkeys, and addressing IoT security and edge computing challenges with FIDO Device Onboarding (FDO).

Last year’s conference in Vietnam welcomed over 300 attendees and featured more than 20 sessions with engaging content alongside a sold-out exhibit area with over 20 industry-leading exhibitors and sponsors. The 2024 summit aims to build upon last year’s momentum with detailed case studies, technical tutorials, expert panels, and hands-on workshops. Sessions are designed to educate attendees on business drivers, technical considerations, and best practices for deploying modern authentication systems across web, enterprise and government applications. Additionally, attendees will benefit from a dynamic expo hall and engaging networking opportunities, set against the backdrop of downtown Kuala Lumpur’s natural beauty.

FIDO APAC Summit 2024 Call for Speakers

The FIDO Alliance invites thought leaders, industry experts, entrepreneurs, and academic professionals to submit speaking proposals to enrich the diverse FIDO APAC Summit 2024 program. Speakers with innovative ideas, implementation strategies, and successes in authentication and/or edge computing, from case studies to transformative projects, can submit proposals here. Selected speakers will join the ranks of top cybersecurity minds, influencing the community and promoting phishing-resistant authentication methods. Submit a proposal for an opportunity to shape cybersecurity’s future in the APAC region. Deadline for submissions is May 31, 2024. 

Sponsorship Opportunities at FIDO APAC Summit 2024

Join sponsors such as Samsung Electronics, SecureMetric, RSA, Thales, VinCSS, iProov, AirCuve, Zimperium, SmartDisplayer, and Utimaco and elevate your brand in the digital security landscape by sponsoring the FIDO APAC Summit 2024. This key event draws the cybersecurity community, offering sponsors a chance to interact with over 30 VIPs, speakers, and 300+ delegates, providing unparalleled brand visibility and thought leadership opportunities in the Asia-Pacific tech ecosystem. The summit is an ideal platform for sponsors eager to connect with an audience passionate about advanced passkeys and phishing-resistant authentication methods. Sponsoring this event places your brand at the forefront, engaging directly with professionals and policymakers driving the future of secure digital identities. Demonstrate your commitment to innovation and the development of secure, user-friendly digital ecosystems and influence the benchmark for authentication technologies by becoming a sponsor.

To become a sponsor, view the prospectus and complete the Sponsorship Request Form.

About FIDO Alliance

Formed in July 2012, the FIDO (Fast IDentity Online) Alliance aims to address the lack of interoperability among strong authentication technologies and the difficulties users face with creating and remembering multiple usernames and passwords. The FIDO Alliance is revolutionizing authentication with standards for simpler, stronger methods that reduce reliance on passwords. FIDO Authentication offers stronger, private, and easier use when authenticating to online services. For more information, visit www.fidoalliance.org.

Tuesday, 20. February 2024

DIF Blog

Full steam ahead for the Veramo User Group

The Veramo User Group has kicked into action with a well-attended and productive first meeting. 

The meeting on 15th February provided context for the origins of the popular Javascript framework and its subsequent donation to DIF, and surfaced a range of questions, ideas, use cases and feedback from current and prospective users, plus success stories including an Enterprise solution built on top of Veramo, now in full production. 

After an initial round of introductions, the Veramo Labs team provided an overview of the project’s history and current status, and the goals of the User Group. 

Ideas shared by participants included a registry of plugins that is actively maintained by the community, additional development languages and a 12-month roadmap of planned new features. 

“The team’s goal has been from the beginning to grow the project by growing the community,” Veramo co-founder and group co-chair Mircea Nistor commented during the meeting. 

“Prior to donating Veramo to DIF, we had previously donated other libraries, and we saw more adoption and contributions happening on these libraries. We’d love to see the same kinds of activities happening within the Veramo User Group,” he added.

“To make this into a community project, it needs the original team to be no longer exclusively in charge. The Veramo framework and plugins already contain enough functionality to get people started in this space. We hope to see many new GitHub issues coming from the User Group, and for those with bandwidth to take them on,” said Senior Engineer at Consensys Identity and group co-chair Nick Reynolds.

The plan is to start reviewing GitHub issues and Pull Requests (PRs) at next week’s meeting, which is at 09:00 PST / Noon EST / 18:00 CET on Thursday, 22 February. The User Group is open to all, the meeting link can be found in the DIF calendar.

In the meantime, interested parties are welcome to join the community on Discord. The new Veramo User Group channel on DIF’s Discord server is recommended for meeting follow-ups and agenda items, and the Veramo Labs Discord server is the place to head for specific technical questions.

The Veramo Labs team is hoping for the community to participate in leading the User Group within the next six months. 


OpenID

Registration Open for OpenID Foundation Hybrid Workshop at Google on Monday, April 15, 2024

Workshop Overview

OpenID Foundation Workshops provide technical insight and influence on current digital identity standards while offering a collaborative platform to openly address current trends and market opportunities. This OpenID Foundation Workshop includes a number of presentations focused on 2024 Foundation strategic initiatives as well as updates on active working groups.


Workshop Details

Thank you kindly to Google for hosting this after-lunch, hybrid workshop on Monday, April 15, 2024, from 12:30-4:00pm PT:

Google 
242 Humboldt Ct
Humboldt 1
Sunnyvale, CA 94089

This is an after-lunch workshop with beverages and snacks provided to those attending in person. The Foundation’s Note Well Statement can be found here and is used to govern workshops.


Agenda

TIME | TOPIC | PRESENTERS
12:30-12:35pm | Welcome | Nat Sakimura
12:35-12:50pm | eKYC & IDA WG Update | Mark Haine
12:50-1:05pm | AuthZEN WG Update | David Brossard & Omri Gazitt
1:05-1:20pm | AB/Connect WG Update | Michael Jones
1:20-1:35pm | FAPI WG Update | Nat Sakimura
1:35-1:50pm | MODRNA WG Update | Bjorn Hjelm
1:50-2:05pm | DCP WG Update | Kristina Yasuda & Joseph Heenan
2:05-2:20pm | Shared Signals WG Update | Tim Cappalli
2:20-2:30pm | BREAK
2:30-2:40pm | OIDF Certification Program Update + Roadmap | Joseph Heenan
2:40-2:55pm | Death & the Digital Estate Community Group | Dean Saxe
2:55-3:05pm | Sustainable & Interoperable Digital Identity Hub Update | Gail Hodges
3:05-3:30pm | Listening Session: Post-Quantum Computing & Identity. What are your concerns? What is OIDF’s role? | Gail Hodges, Nancy Cam-Winget, John Bradley, Rick Byers
3:30-3:55pm | Listening Session: AI & Identity. What are your concerns? What is OIDF’s role? | Nancy Cam-Winget, Kaelig Deloumeau-Prigent, Mike Kiser, Geraint Rogers
3:55-4:00pm | Closing Remarks | Nat Sakimura

The post Registration Open for OpenID Foundation Hybrid Workshop at Google on Monday, April 15, 2024 first appeared on OpenID Foundation.


Content Authenticity Initiative

February 2024 | This Month in Generative AI: Election Season

From AI-resurrected dictators to AI-powered interactive chatbots, political campaigns around the world are deploying the technology to expand their audience and win over voters. This month, Hany Farid, UC Berkeley Professor and CAI Advisor, looks at how it is becoming easier to combine fake audio with video, the clear effect on the electorate, and existing approaches to authenticating digital media.


by Hany Farid, UC Berkeley Professor, CAI Advisor

News and trends shaping our understanding of generative AI technology and its applications.

In May of 2019, a manipulated video of House Speaker Nancy Pelosi purportedly slurring her words in a public speech racked up over 2.5 million views on Facebook. Although the video was widely reported to be a deepfake, it was what we would today call a “cheap fake.” The original video of Speaker Pelosi was simply slowed down to make her sound inebriated — no AI needed. The cheap fake was, however, a harbinger.

Around 2 billion citizens will vote this year in some 70 elections around the globe. At the same time, generative AI has emerged as a powerful technology that can entertain, defraud, and deceive.

Today, nearly anyone can use generative AI to create hyper-realistic images from only a text prompt, clone a person's voice from a 30-second recording, or modify a video to make the speaker say things they never did or would say. Perhaps not surprisingly, generative AI is finding its way into everything from local to national and international politics. Some of these applications are used to bolster a candidate, but many are designed to be harmful to a candidate or party, and all applications raise new and complex questions.

Trying to help

In October of last year, New York City Mayor Eric Adams used generative AI to make robocalls in which he spoke Mandarin and Yiddish. (Adams only speaks English.) The calls did not disclose that the voice was AI-generated, and at least some New Yorkers believe that Adams is multilingual: "People stop me on the street all the time and say, ‘I didn’t know you speak Mandarin,’" Adams said. While the content of the calls was not deceptive, some claimed that the calls themselves were deceptive and an unethical use of AI.

Not to be outdone, earlier this year Representative Dean Phillips deployed a full-blown OpenAI-powered interactive chatbot to bolster his long-shot bid for the Democratic nomination in the upcoming presidential primary. The chatbot disclosed that it was an AI-bot and allowed voters to ask questions and hear an AI-generated response in an AI-generated version of Phillips's voice. Because this bot violated OpenAI's terms of service, it was eventually taken offline.

Trying to harm

In October of last year, Slovakia — a country that shares part of its eastern border with Ukraine — saw a last-minute and dramatic shift in its parliamentary election. Just 48 hours before election day, the pro-NATO and Western-aligned candidate Michal Šimečka was leading in the polls by some four points. A fake audio clip of Šimečka seeming to claim that he was going to rig the election spread quickly online, and two days later the pro-Moscow candidate Robert Fico won the election by five points. It is impossible to say exactly how much the audio impacted the outcome, but the incident raised concerns about the use of AI in campaigns.

Fast-forward to January of this year, when the state of New Hampshire was holding the nation's first primary for the 2024 US presidential election. On the eve of the primary, more than 20,000 New Hampshire residents received robocalls impersonating President Biden. The call urged voters not to vote in the primary and to "save your vote for the November election." It took two weeks before New Hampshire’s Attorney General announced that his office had identified two businesses behind these robocalls. 

The past few months have also seen an increasing number of viral images making the rounds on social media. These range from faked images of Trump with convicted child sex trafficker Jeffrey Epstein and a young girl, to faked images of Biden in military fatigues on the verge of authorizing military strikes. 

On the video front, it is becoming increasingly easier to combine fake audio with video to make people say and do things they never did. For example, a speech originally given by Vice President Harris on April 25, 2023, at Howard University was digitally altered to replace the voice track with a seemingly inebriated and rambling Harris.

And these are just a few examples of the politically motivated deepfakes that we have already started to see as the US national election heats up. In the coming months, I'll be keeping track of these examples as they continue to emerge.

Something in between

In the lead-up to Indonesia's election earlier in February, a once-feared army general who ruled the country with an iron fist for more than three decades was AI-resurrected with a message for voters. And in India, M. Karunanidhi, former leader of the Dravida Munnetra Kazhagam party – deceased since 2018 – was AI-resurrected with an endorsement for his son, the sitting Chief Minister of Tamil Nadu. I expect this type of virtual endorsement will become an (ethically complex) trend.

Looking ahead

There are two primary approaches to authenticating digital media. Reactive techniques analyze various aspects of an image or video for traces of implausible or inconsistent properties. Learn more about these photo forensics techniques in my series for the CAI. Proactive techniques, on the other hand, operate at the source of content creation, embedding into or extracting from an image or video an identifying digital watermark or signature. 
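
As a toy illustration of the proactive approach, the sketch below signs a media file's bytes at the point of capture and verifies them later. Real provenance systems such as Content Credentials embed richer, standardized manifests, so treat this only as a schematic, with Node's built-in crypto module standing in for that machinery.

```typescript
// Schematic only: sign content at the source, verify it later.
// Real provenance standards embed structured manifests rather than a bare
// signature; this just shows the core idea of source-side signing.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// At capture time: the device signs the media bytes with its private key.
function signAtSource(mediaBytes: Buffer): Buffer {
  return sign(null, mediaBytes, privateKey);
}

// Later: anyone with the device's public key can check the bytes are unmodified.
function verifyProvenance(mediaBytes: Buffer, signature: Buffer): boolean {
  return verify(null, mediaBytes, publicKey, signature);
}

const media = Buffer.from("raw image bytes");
const sig = signAtSource(media);
console.log(verifyProvenance(media, sig));                   // true
console.log(verifyProvenance(Buffer.from("tampered"), sig)); // false
```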

Although not perfect, these combined reactive and proactive technologies will make it harder (but not impossible) to create a compelling fake and easier to verify the integrity of real content. The creation and detection of manipulated media, however, is inherently adversarial. Both sides will continually adapt, making distinguishing the real from the fake an ongoing challenge.

While it is relatively straightforward to regulate AI-powered non-consensual sexual imagery, child abuse imagery, and content designed to defraud, regulating political speech is more fraught. We, of course, want to give a wide berth for political discourse, but there should be limits on activities like those we saw in New Hampshire, where bad actors attempt to interfere with our voting rights. 

As a first step, following the New Hampshire AI-powered robocalls, the Federal Communications Commission quickly announced a ban on AI-powered robocalls. While the ruling is fairly narrow and doesn't address the wider issue of AI-powered election interference or non-AI-powered interference, it is a reasonable precaution as we all try to sort out this brave new world where anybody's voice or likeness can be manipulated.

As we continue to wrestle with these complex questions, we as consumers have to be particularly vigilant as we enter what is sure to be a highly contentious election season. We should be vigilant not to fall for disinformation just because it conforms to our personal views, we should be vigilant not to be part of the problem by spreading disinformation, and we should be vigilant to protect our and others' rights (even if we disagree with them) to participate in our democracy.

Subscribe to the CAI newsletter to receive ecosystem news.

Stay connected and consider joining the movement to restore trust and transparency online.

Author bio: Professor Hany Farid is a world-renowned expert in the field of misinformation, disinformation, and digital forensics. He joined the Content Authenticity Initiative (CAI) as an advisor in June 2023. The CAI is an Adobe-led community of media and tech companies, NGOs, academics, and others working to promote adoption of the open industry standard for content authenticity and provenance.

Professor Farid teaches at the University of California, Berkeley, with a joint appointment in electrical engineering and computer sciences at the School of Information. He’s also a member of the Berkeley Artificial Intelligence Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering Program, and Vision Science Program, and he’s a senior faculty advisor for the Center for Long-Term Cybersecurity. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception.

He received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989, his M.S. in computer science from SUNY Albany, and his Ph.D. in computer science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019.

Professor Farid is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and he’s a fellow of the National Academy of Inventors.


FIDO Alliance

Intelligent Health.Tech: Site security: Passwordless fingerprint authentication

Thales has announced the SafeNet IDPrime FIDO Bio Smart Card – a security key that enables strong multi-factor authentication (MFA) for the enterprise. This new contactless smart card allows users to access enterprise devices, applications and cloud services using a fingerprint instead of a password. 


StateTech Magazine: How Passwordless Authentication Supports Zero Trust

Utilizing FIDO passkeys addresses security risks associated with password-based systems which often lead to account takeovers, data breaches and even stolen identities. While password managers and legacy forms of two-factor authentication offer incremental improvements, there has been industry-wide collaboration to create passkey sign-in technology that is more convenient and more secure.


TechTarget: How passwordless helps guard against AI-enhanced attacks

In the age of generative AI, phishing scams (which already account for 90% of data breaches according to CISA) are becoming increasingly persuasive and humanlike. To mitigate these evolving threats, organizations should prioritize transitioning to passkeys, a phishing-resistant alternative backed by industry giants like Google, Apple, Amazon, and Microsoft, to enhance both security and usability.


The Wall Street Journal: Forget Passwords and Badges: Your Body Is Your Next Security Key

Andrew Shikiar, executive director of the FIDO Alliance, emphasizes the importance of biometric scans as hacking attempts and other cyber threats have become more sophisticated.



Velocity Network

Live event with Randstad and Rabobank

On March 19th, discover how the Dutch Banking industry is using verifiable credentials to accelerate the shift to a skills-based economy.

The post Live event with Randstad and Rabobank appeared first on Velocity.


MyData

Open Position: Finance and admin officer (50% FTE)

Job title: Finance and admin officer
Employment type: 50% employment contract
Contract duration: Permanent with a 6-month trial period
Salary range: 1,100 € – 1,250 € (2,200 € – 2,500 € FTE)
Location: Finland, with a preference for Helsinki
Reports to: Executive Director
Role description: The Finance and Administration Officer is responsible for monitoring and implementing financial operations, setting up and […]

Monday, 19. February 2024

Identity At The Center - Podcast

In our latest episode of The Identity at the Center Podcast,

In our latest episode of The Identity at the Center Podcast, we dive into a conversation with Daniel Grube about TikTok's adoption of FIDO technology to enhance security. Daniel shares insights into the seamless integration of this technology for both enterprise and user benefits, emphasizing the importance of user education and phased technology rollouts. We also explore the lighter side with a debate on airplane seating preferences. Listen to this enlightening discussion at idacpodcast.com or wherever you download your podcasts.

#iam #podcast #idac


GS1

e-CMR in GS1 Belgium

In April 2022, GS1 Belgium & Luxembourg launched a pilot project on e-CMR together with 7 companies, amongst which AB InBev.

The goal of this pilot project is to optimise the digitalisation of transport with e-CMR and to define standards that everyone can use. One year later, we already have some great insights, and we asked Andreea Calin from AB InBev to share her findings on e-CMR and our pilot project.

See more in GS1 Belgilux's article


Paperless – GS1 Poland (in Polish)

paperless_logistyka_bez_papieru_taniej_szybciej_bezpieczniej.pdf

E-CMR in Colian Logistic – GS1 Poland (in Polish)

bc_ecmr.pdf

Friday, 16. February 2024

Oasis Open Projects

Approved Errata for Common Security Advisory Framework v2.0 published

Update to the definitive reference for the CSAF language now available.

CSAF Aggregator schema updated

OASIS and the OASIS Common Security Advisory Framework (CSAF) TC [1] are pleased to announce the approval and publication of Common Security Advisory Framework Version 2.0 Errata 01.

This document lists the approved errata for the OASIS Standard “Common Security Advisory Framework Version 2.0.” The specific changes are listed in section 1.1, at https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/os/csaf-v2.0-errata01-os.html#11-description-of-changes.

The Common Security Advisory Framework (CSAF) Version 2.0 is the definitive reference for the CSAF language which supports creation, update, and interoperable exchange of security advisories as structured information on products, vulnerabilities and the status of impact and remediation among interested parties.

The OASIS CSAF Technical Committee is chartered to make a major revision to the widely-adopted Common Vulnerability Reporting Framework (CVRF) specification, originally developed by the Industry Consortium for Advancement of Security on the Internet (ICASI). ICASI has contributed CVRF to the CSAF TC. The revision is being developed under the name Common Security Advisory Framework (CSAF). TC deliverables are designed to standardize existing practice in structured machine-readable vulnerability-related advisories and further refine those standards over time.
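
For orientation, here is a hedged sketch of the top-level skeleton a CSAF 2.0 advisory follows, written as a TypeScript object so it can be serialized to JSON and checked against the CSAF JSON schema linked below. Only a few representative fields are shown with made-up values; consult the specification for the complete, authoritative structure.

```typescript
// A hedged sketch of the skeleton of a CSAF 2.0 advisory. Field values are
// illustrative; validate real documents against csaf_json_schema.json.
const advisory = {
  document: {
    category: "csaf_base",
    csaf_version: "2.0",
    publisher: {
      category: "vendor",
      name: "Example Vendor PSIRT",
      namespace: "https://psirt.example.com",
    },
    title: "Example advisory for a fixed vulnerability",
    tracking: {
      id: "EXAMPLE-2024-0001",
      status: "final",
      version: "1.0.0",
      initial_release_date: "2024-02-16T00:00:00.000Z",
      current_release_date: "2024-02-16T00:00:00.000Z",
      revision_history: [
        { date: "2024-02-16T00:00:00.000Z", number: "1.0.0", summary: "Initial release" },
      ],
    },
  },
  // A product tree and vulnerabilities array would follow in a real advisory.
};

console.log(JSON.stringify(advisory, null, 2));
```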

The documents and related files are available here:

Common Security Advisory Framework Version 2.0 Errata 01
OASIS Approved Errata
26 January 2024

Editable source (Authoritative):
https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/os/csaf-v2.0-errata01-os.md

HTML:
https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/os/csaf-v2.0-errata01-os.html

PDF:
https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/os/csaf-v2.0-errata01-os.pdf

JSON schemas:
Aggregator JSON schema:
https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/os/schemas/aggregator_json_schema.json
CSAF JSON schema:
https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/os/schemas/csaf_json_schema.json
Provider JSON schema:
https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/os/schemas/provider_json_schema.json

For your convenience, OASIS provides a complete package of the specification document and any related files in ZIP distribution files. You can download the ZIP file at:
https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/os/csaf-v2.0-errata01-os.zip

Members of the CSAF TC [1] approved the publication of these Errata by Full Majority Vote [2]. The Errata had been released for public review as required by the TC Process [3]. The Approved Errata are now available online in the OASIS Library as referenced above.

Our congratulations to the CSAF TC on achieving this milestone.

========== Additional references:
[1] OASIS Common Security Advisory Framework (CSAF) TC
https://www.oasis-open.org/committees/csaf/

[2] https://lists.oasis-open.org/archives/csaf/202402/msg00001.html

[3] Public review:
– 15-day public review, 20 December 2023: https://lists.oasis-open.org/archives/members/202312/msg00005.html
– Comment resolution log: https://docs.oasis-open.org/csaf/csaf/v2.0/errata01/csd01/csaf-v2.0-errata01-csd01-comment-resolution-log.txt

The post Approved Errata for Common Security Advisory Framework v2.0 published appeared first on OASIS Open.


FIDO Alliance

White Paper: Addressing FIDO Alliance’s Technologies in Post Quantum World

There has been considerable press, a number of papers, and several formal initiatives concerned with quantum computing’s impact on cryptographic algorithms and protocols. Most standards development organizations are addressing concerns […]

There has been considerable press, a number of papers, and several formal initiatives concerned with quantum computing’s impact on cryptographic algorithms and protocols. Most standards development organizations are addressing concerns about the impact on the security of the currently deployed cryptographic algorithms and protocols. This paper presents FIDO Alliance initiatives that address the impact of quantum computing on the Alliance’s specifications and how the FIDO Alliance is working to retain the long-term value provided by products and services based on the FIDO Alliance specifications. 

This paper is directed to those who have or are considering FIDO-enabled products and solutions but have concerns about the impact of Quantum Computing on their business. This paper will focus, from a high-level approach, on the FIDO Alliance’s acknowledgment of issues related to Quantum Computing and explain how the FIDO Alliance is taking appropriate steps to provide a seamless transition from the current cryptographic algorithms and protocols to new PQC (or quantum-safe) algorithms in a timely manner.

For any questions or comments, please contact feedback@fidoalliance.org.

Wednesday, 14. February 2024

FIDO Alliance

Webinar: Next-Gen Authentication: Implementing Passkeys for your Digital Services

Despite their shortcomings, passwords have been a necessary evil; an unavoidable reality. No wonder online services have struggled to get rid of passwords for nearly three decades. Not anymore! Passkeys […]

Despite their shortcomings, passwords have been a necessary evil; an unavoidable reality. No wonder online services have struggled to get rid of passwords for nearly three decades. Not anymore! Passkeys have emerged as a modern form of authentication, offering a superior user experience and higher security. With over 8 billion accounts already protected by passkeys, the question for service providers isn’t “if” they should adopt passkeys, but rather “when” and “how”. 

During the webinar attendees were able to: 

Learn why this standards-based approach is gaining such rapid traction
Understand how ready your end users are for adopting passkeys
Get actionable guidance to roll out passkeys for both low and high assurance authentication
Understand why you should introduce passkeys for your digital services right away
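For teams moving from “whether” to “how”, the browser-side call that creates a passkey is the standard WebAuthn API that FIDO2 builds on. The following is a minimal TypeScript sketch; the relying-party details, user handle, and challenge are placeholders that a real deployment would receive from its server.

// Minimal sketch of creating a passkey (discoverable credential) in the browser.
// The challenge and user.id shown here are placeholders; use server-issued values.
async function createPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder challenge
    rp: { name: "Example Service", id: "example.com" },     // hypothetical relying party
    user: {
      id: new TextEncoder().encode("user-123"),             // hypothetical user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],    // ES256
    authenticatorSelection: {
      residentKey: "required",        // makes the credential a passkey
      userVerification: "preferred",
    },
  };
  return navigator.credentials.create({ publicKey });
}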

Tuesday, 13. February 2024

Oasis Open Projects

The DocBook Schema Version 5.2 OASIS Standard published

DocBook Version 5.2 continues the evolution of the DocBook XML schema. The post The DocBook Schema Version 5.2 OASIS Standard published appeared first on OASIS Open.

DocBook continues its evolution - over 25 years since origin

OASIS is pleased to announce the publication of its newest OASIS Standard, approved by the members on 06 February 2024:

The DocBook Schema Version 5.2
OASIS Standard
06 February 2024

Overview:

Almost all computer hardware and software developed around the world needs some documentation. For the most part, this documentation has a similar structure and a large core of common idioms. The community benefits from having a standard, open, interchangeable vocabulary in which to write this documentation. DocBook has been, and will continue to be, designed to satisfy this requirement. For more than 25 years, DocBook has provided a structured markup vocabulary for just this purpose. DocBook Version 5.2 continues the evolution of the DocBook XML schema.

The prose specifications and related files are available here:

The DocBook Schema Version 5.2

Editable source (Authoritative):
https://docs.oasis-open.org/docbook/docbook/v5.2/os/docbook-v5.2-os.docx

HTML:
https://docs.oasis-open.org/docbook/docbook/v5.2/os/docbook-v5.2-os.html

PDF:
https://docs.oasis-open.org/docbook/docbook/v5.2/os/docbook-v5.2-os.pdf

Schemas:
Relax NG schemas: https://docs.oasis-open.org/docbook/docbook/v5.2/os/rng/
Schematron schemas: https://docs.oasis-open.org/docbook/docbook/v5.2/os/sch/
XML catalog: https://docs.oasis-open.org/docbook/docbook/v5.2/os/catalog.xml
NVDL schemas: https://docs.oasis-open.org/docbook/docbook/v5.2/os/

Distribution ZIP file

For your convenience, OASIS provides a complete package of the prose specification and related files in a ZIP distribution file. You can download the ZIP file here:

https://docs.oasis-open.org/docbook/docbook/v5.2/os/docbook-v5.2-os.zip

Our congratulations to the members of the OASIS DocBook Technical Committee on achieving this milestone.

The post The DocBook Schema Version 5.2 OASIS Standard published appeared first on OASIS Open.

Monday, 12. February 2024

Identity At The Center - Podcast

It’s time for the next exciting episode of the Identity at the Center Podcast!


It’s time for the next exciting episode of the Identity at the Center Podcast! We dive into the fascinating world of Security Operations Centers (SOCs) and their crucial role in identity security.

In this episode, we had the privilege of hosting two experts from RSM's Managed Security Practice, Steve Kane and Todd Willoughby. Their insights and expertise shed light on the role of SOCs in identity security, evolving threats, and the importance of identity data within SOCs. We also explore the decision-making process between building your own SOC or outsourcing.

Listen to the full episode on idacpodcast.com or in your favorite podcast app and gain valuable insights into the anatomy of a breach, actions taken by SOCs to prevent attacks, and the tactics and techniques used by threat actors to avoid detection.

#iam #podcast #idac

Friday, 09. February 2024

FIDO Alliance

WIRED: I Stopped Using Passwords. It’s Great—and a Total Mess

More than 8 billion online accounts can set up passkeys right now, says Andrew Shikiar, the chief executive of the FIDO Alliance, an industry body that has developed the passkey […]

More than 8 billion online accounts can set up passkeys right now, says Andrew Shikiar, the chief executive of the FIDO Alliance, an industry body that has developed the passkey over the past decade. So, I decided to kill my passwords.


TechRound: Top 10 UK Business Cybersecurity Providers

Intercede’s solutions offer maximum protection against data breaches, focusing on:


Digital Identity Management for citizens, the workforce, and supply chains
Compliance adherence
Technological solutions such as FIDO, Digital ID Registration, Mobile Authentication, and PKI for robust identity and credential management

International Security Journal: The role of MFA in the fight against phishing

Based on FIDO Alliance and W3C standards, passkeys replace passwords with cryptographic key pairs. This requires the user to further authenticate themselves off-site using either soft or hardware-bound solutions.




Gear Patrol: Want a Faster, More Secure Way of Logging into X on Your iPhone? Use a Passkey

X (formerly Twitter) has introduced passkeys for iPhone users as an alternative to traditional passwords. Passkeys offer heightened security through its inherent two-step authentication system and are generated by the […]

X (formerly Twitter) has introduced passkeys for iPhone users as an alternative to traditional passwords. Passkeys offer heightened security through their inherent two-step authentication system and are generated by the device along with the X account, making them less vulnerable to phishing and unauthorized access.


Velocity Network

NSC’s Chris Goodson joins Velocity’s board

We're delighted that National Student Clearinghouse's Chris Goodson has been voted onto the Velocity Network Foundation Board of Directors. The post NSC’s Chris Goodson joins Velocity’s board appeared first on Velocity.

Thursday, 08. February 2024

OpenID

Public Review Period for Proposed Implementer’s Draft of OpenID for Verifiable Credential Issuance

The OpenID Connect Working Group recommends approval of the following specification as an OpenID Implementer’s Draft: OpenID for Verifiable Credential Issuance 1.0 This would be the first Implementer’s Draft of this specification. An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the […]

The OpenID Connect Working Group recommends approval of the following specification as an OpenID Implementer’s Draft:

OpenID for Verifiable Credential Issuance 1.0

This would be the first Implementer’s Draft of this specification.

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the 45-day public review period for the specification draft in accordance with the OpenID Foundation IPR policies and procedures. Unless issues are identified during the review that the working group believes must be addressed by revising the draft, this review period will be followed by a seven-day voting period during which OpenID Foundation members will vote on whether to approve this draft as an OpenID Implementer’s Draft. For the convenience of members who have completed their reviews by then, voting will actually begin a week before the start of the official voting period.

The relevant dates are:

Implementer’s Draft public review period: Thursday, February 8, 2024 to Sunday, March 24, 2024 (45 days)
Implementer’s Draft vote announcement: Monday, March 11, 2024
Implementer’s Draft early voting opens: Monday, March 18, 2024*
Implementer’s Draft official voting period: Monday, March 25, 2024 to Monday, April 1, 2024 (7 days)*

* Note: Early voting before the start of the formal voting period will be allowed.

The OpenID Connect working group page is https://openid.net/wg/connect/. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration. If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specification in a way that enables the working group to act upon it by (1) signing the contribution agreement at https://openid.net/intellectual-property/ to join the working group (please specify that you are joining the “AB/Connect” working group on your contribution agreement), (2) joining the working group mailing list at https://lists.openid.net/mailman/listinfo/openid-specs-ab, and (3) sending your feedback to the list.

— Michael B. Jones

The post Public Review Period for Proposed Implementer’s Draft of OpenID for Verifiable Credential Issuance first appeared on OpenID Foundation.


We Are Open co-op

Pathways to Change

How to run an impactful workshop using our free template

Recently, we ran a Theory of Change workshop for the team at the Digital Credentials Consortium, which is hosted by MIT. We’ve found that organisations and projects that are looking to create big impact can benefit from this way of seeing into the future.

Theory of Change (ToC) is a methodology or a criterion for planning, participation, adaptive management, and evaluation that is used in companies, philanthropy, not-for-profit, international development, research, and government sectors to promote social change. (Wikipedia)

In this post we’ll outline what this kind of session aims to achieve, share a template which you can re-use, and explain how to make best use of it.

We’ve become pretty good at running these kinds of workshops for all kinds of clients, large and small, and find them particularly useful in charting a course for collaborative working. Thanks goes to Outlandish for introducing this approach to us!

ToC workshop template by WAO available under a Creative Commons Attribution 4.0 International license

Note: around seven people is ideal for this kind of workshop. We run this workshop remotely, but there’s no reason why it couldn’t be done in person.

🥝 At the Core

The template has several sections to it, but at the core is the triangle of Final goal, Outcomes, and Activities. You work through these in turn, first defining the goal, moving onto outcomes to support that goal, and then activities which lead to the outcomes.

The core of the ToC approach

One of the first things to figure out as a team is the timeframe for the work you are doing together. In terms of the final goal, is that to be achieved in six months? a year? 18 months? three years?

Write that down on the sticky note at the top left-hand corner just to remind everyone.

⛏️ Breaking down the goal

The final goal can be difficult to write, so we’ve broken it down into three sections to make it easier for participants:

Before asking people to contribute ideas, we run through some examples from our own experience. The first row relates to work over around six months, the second over about 18 months, and the third over about three years.

Next, we use the section to the right-hand side, where each individual participant can take some time to write down what they think the organisation does (or should do) to influence their stakeholders and have the desired impact in the world.

They can approach these boxes in any order — for example, some people find it easier to go straight to the impact and then work backwards.

Once everyone has written something in all of the boxes, we go around and ask everyone in turn to explain what they’ve written. This adds some context.

Then, we go around again, and ask everyone to point to things that other people have written that they definitely agree with. This sets the scene for combining ideas into a collaborative final goal.

✅ Good enough for now, safe enough to try

After a quick break, participants are ready to collaborate on a combined final goal. We ask if anyone would like to have a go at filling in one of the boxes. They can do this directly themselves, or we can screenshare and fill it in for them.

After some discussion and iteration of what’s been written, we move onto the other boxes. It’s worth mentioning that the most important thing here is facilitated discussion, which means timeboxing in a way that doesn’t feel rushed.

The phrase to bear in mind is “good enough for now, safe enough to try”, which is a slightly different way of saying “perfect is the enemy of good”.

🔍 Identifying the Outcomes

Getting the goal agreed on by the team is 80% of the work in this session. In our experience, it’s entirely normal for this to take an entire 90-minute session, or even longer.

Moving on to the outcomes, these are statements which support the goal. They are changes or achievements that need to happen for the goal to be achieved; they should be written in a way that makes it possible to say “yes, that has happened” or “no, it has not”.

For example, “the world is a better place” is not an example of a well-written outcome, but “more people agree that the city is a safer place to live” would work.

Other examples of decent outcomes from different kinds of work might be:

Local biodiversity is enhanced and pollution is reduced.
Parents demonstrate improved understanding of internet safety and digital citizenship.
Economic diversity within neighbourhoods is increased.

There are several ways we’ve run this part of the workshop, from full-anarchism mode where people just ‘have at it’ through to taking it in turns in an orderly way to add (and then discuss) an outcome.

🚣 Getting to the Activities

People new to ToC workshops often conflate Outcomes and Activities. The easiest way to tell the difference is to ask whether it’s something we’re working towards, or whether it’s something we’re doing.

So, for example, if we take the outcome “Local biodiversity is enhanced and pollution is reduced” some supporting activities might be:

Introduce incentives for creating wildlife-friendly spaces, such as green roofs and community gardens.
Run regular river and park clean-up operations to remove pollutants and litter.
Enforce stricter regulations on industrial emissions and waste management.
Offer subsidies for businesses that implement green practices that reduce pollution and enhance biodiversity.
Promote the use of environmentally friendly pesticides and fertilisers in local farming and gardening.

Again, we’ve run workshops where we’ve just had a free-for-all, others where it’s been more orderly, and then others where teams have gone away and come up with the activities outside the session.

Some, in fact, have taken the existing activities they’re engaged with and tried mapping those onto the outcomes. It’s an interesting conversation when those activities don’t map!

💡 Final thoughts

A ToC workshop is a powerful way to chart a course together. It’s a collaborative endeavour for a small group to spend time on. What’s important is strong facilitation, as without it, participants can spend too much time (or not enough!) sharing their thoughts.

If you would like to explore WAO running a ToC workshop for your organisation, get in touch! We also have other approaches and openly-licensed templates that you may want to use and peruse at our Learn with WAO site.

Pathways to Change was originally published in We Are Open Co-op on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 07. February 2024

Ceramic Network

CeramicWorld 02

A digest of everything happening across the Ceramic ecosystem for February 2024: ETHDenver, new Orbis, DataFeed API, points, mainnet launches and more!

Welcome to the second edition of CeramicWorld, the monthly Ceramic ecosystem newsletter. Let's dive right in!

🏔️ Attend Proof of Data Summit!

Join Ceramic x Tableland on March 1 in Denver (or via livestream) for Proof of Data Summit, a full-day community gathering on reputation, identity, DePIN, decentralized AI, and decentralized data computing. Featuring lightning talks, technical discussions, and panels with industry visionaries, this is going to be a can't-miss event! RSVP now to secure your spot in person or via livestream.

RSVP to Proof of Data Summit

👀 Loading the all new Orbis...

Orbis is expanding beyond social. Driven by developer feedback and a new role as core developers in the Ceramic ecosystem, Orbis’ mission is evolving to offer a simple and efficient gateway for storing and managing open data on Ceramic.

The all new Orbis will provide a developer-friendly SQL interface to explore and query data on Ceramic as well as a user interface and plugin store to save development time on crypto-specific features – from data migration and token gating mechanisms to automated blockchain interactions.

Orbis is built on Ceramic's new Data Feed API, making it fully compatible with ComposeDB. With the new Orbis, developing your project on Ceramic is easier than ever. If you want to learn more or join as an alpha tester, get in touch with the Orbis team on Discord or Twitter.

Learn more about OrbisDB

🔎 Index Network connects LLMs to Ceramic data

Index Network is a composable discovery protocol that enables personalized and autonomous discovery experiences across the web. Index is currently focused on enabling AI agents to query and interact with Ceramic data in a reactive, event-driven manner. Index has also partnered with a number of ecosystem teams like Intuition, Veramo, and more to enable users to create claims and attestations with natural language. Keep an eye out for updates from this team – mainnet launch seems imminent – and check out their documentation.

Learn more about Index

📈 The Ceramic ecosystem is growing

We found it difficult to keep track of all the new projects and initiatives sprouting up throughout the Ceramic ecosystem, so we made this ecosystem map. Let us know if we missed anyone. Enjoy! :)

Ceramic's Data Feed API opens for alpha testing
The Data Feed API is a set of new Ceramic APIs that enable developers to subscribe to the node's data change feed, allowing them to build Ceramic-powered databases and indexing solutions. A number of ecosystem partners are already in testing as we gear up for an early release before EthDenver! ComposeDB now supports interfaces
Interfaces enable standardized data models for interoperability. By defining essential fields that must be shared across models, interfaces facilitate data integration and querying across different models. This is vital for ensuring data consistency, especially in decentralized systems like verifiable credentials. For a detailed overview, see Intro to Interfaces. Get started quickly with create-ceramic-app
The create-ceramic-app CLI tool simplifies the process of starting with the Ceramic network by allowing you to quickly set up a ComposeDB example app. If you're familiar with create-react-app or create-next-app, you should be right at home. If you want to quickly test a Ceramic app locally on your system, simply run npx @ceramicnetwork/create-ceramic-app. This command will guide you through creating a new Ceramic-powered social app in under a minute. Collect attendance badges at ETHDenver 2024!
Ceramic is partnering with Fluence, a decentralized computing marketplace, to put forward a demo that will be in play at each of the events above. You will be able to find us at each event and tap a disc to participate! With each attendance you will claim badges represented as documents on Ceramic. Fluence will be consuming the new Ceramic Data Feed API to enable compute over incoming badges. Deprecation of IDX, Self.ID, Glaze, DID DataStore, 3ID, TileDocuments, Caip10Link
3Box Labs announced the deprecation of a suite of outdated Ceramic development tools including Self.ID, Glaze, DID DataStore, 3ID, 3id-datastore, TileDocuments, and Caip10Link. Due to the improvements in ComposeDB an other Ceramic databases over the last 2 years, these tools saw waning demand, creating significant maintenance overhead while failing to meet our strict UX and security standards. If you're using any of these tools, read this announcement for next steps. Ceramic Community Content FORUM Ceramic protocol minimization? WORKING GROUP Ceramic Points Working Group consisting of 10+ teams formed as a result of this forum post and tweet PODCAST Why to Store ID Data Decentralized with Ceramic TWITTER Oamo launches many new data pools in January BLOG Charmverse x Ceramic: Empowering User Data Ownership in the Blockchain Era TUTORIAL WalletConnect: Create User Sessions with Web3Modal BLOG How Rust delivers speed and security for Ceramic FORUM Ceramic x Farcaster Frames FORUM Making ComposeDB’s composites sharable WORKING GROUP Ceramic Core Devs Notes: 2024-01-02 Upcoming Events Feb 15 Ceramic Core Devs Call Feb 25 - Mar 2 Ceramic x Silk Hacker House (EthDenver): Calling all hackers excited about decentralized tech! Apply to join the Silk EthDenver Hacker House from Feb 25th - March 2nd and take part in revolutionizing scientific tooling, web account UX, governance forums, and more! Participants are encouraged to utilize Ceramic as a decentralized data layer alongside other ecosystem tools like Orbis, EAS and more. (Very limited spots) Feb 25 - Mar 2 DeSci Denver (EthDenver) Feb 27 DePin Day (EthDenver) Feb 28 Open Data Day (EthDenver) Mar 1 Proof of Data Summit (EthDenver) Work on Ceramic JOBS Head of Growth, 3Box Labs (Remote) JOBS Engineering Manager, 3Box Labs (Remote) BOUNTY Build a Ceramic x Scaffold-Eth Module Contact Us

Want to get in touch with the Ceramic core team? Fill out this form (1m). Otherwise, drop us a note in the Forum.


Next Level Supply Chain Podcast with GS1

The Future of Connectivity with Digital Twins, AI, and the Global Supply Chain


Real-time data monitoring is revolutionizing maintenance and efficiency in industries such as aviation and automotive through digital twin technology.

Richard Donaldson, host of the Supply Chain Next podcast, is a visionary in supply chain management and a circular economy advocate. His insights on moving from linear to circular supply chains highlight the potential for substantial environmental benefits and the importance of embracing reuse, especially in the context of his work with startups promoting circularity.

The dialogue extends beyond the digital twin to the broader digital transformation of global supply chains, drawing comparisons to the quick adoption of airplane wifi as an example of rapid technological progress. It explores the role of artificial intelligence in supply chain automation and predictive maintenance, touching upon the divide between machine learning and self-actualized thought. The conversation resonates with historical references and Richard's personal entrepreneurial experiences, including his tenure at eBay, his podcast Supply Chain Next, and his perspective on learning from failure. This episode offers a thought-provoking reflection on the future of supply chains and the role of technology in sustainable business practices.

 

Key takeaways: 

The early days of the Internet continue to influence current work in digitizing supply chains.

The global supply chain still lacks full digitization and transparency, particularly in older, established processes.

There is a strong advocacy for shifting towards circular supply chains that are environmentally mindful and focused on sustainability.

 

Connect with GS1 US:

Our website - www.gs1us.org

GS1US on LinkedIn

 

Connect with guest:

Richard Donaldson on LinkedIn

 

Monday, 05. February 2024

FIDO Alliance

ITPro: The end of passwords – and how businesses will embrace it

Big tech firms including Microsoft, Apple and Google have been moving towards a passwordless future for several years, with solutions such as security keys and more recently, passkeys, starting to take off as part of multi-factor authentication […]

Big tech firms including Microsoft, Apple and Google have been moving towards a passwordless future for several years, with solutions such as security keys and, more recently, passkeys starting to take off as part of multi-factor authentication (MFA) setups. 

The FIDO Alliance – which most big tech players are members of – is pushing hard for the demise of the password. But what exactly does “the end of the password” mean, in practical terms?


GovTech: Forum Questions Future of Digital Identity, Path Forward

At the recent ID policy forum, the FIDO Alliance, The Identity Theft Resource Center, and other cybersecurity experts discussed the need for new identity verification methods as data breaches reached […]

At the recent ID policy forum, the FIDO Alliance, The Identity Theft Resource Center, and other cybersecurity experts discussed the need for new identity verification methods as data breaches reached record levels in 2023. Panelists argued that relying solely on knowledge-based methods like passwords and Social Security numbers is no longer secure and highlighted the importance of multifactor authentication, passkeys, and biometric checks.


PCMag: Passkeys Are Here: We Just Have to Convince People to Use Them

In a recent identity and authentication conference, Andrew Shikiar, Executive Director of the FIDO Alliance, declared 2023 as the “year of the passkey,” citing 8 billion user accounts with passkey […]

In a recent identity and authentication conference, Andrew Shikiar, Executive Director of the FIDO Alliance, declared 2023 as the “year of the passkey,” citing 8 billion user accounts with passkey access. Shikiar also emphasized the importance of passkeys in enhancing security, streamlining customer experiences, and gradually eliminating the reliance on traditional passwords, while acknowledging ongoing challenges and gaps in support across different industries and platforms.


Content Authenticity Initiative

January 2024 | This Month in Generative AI: Frauds and Scams

News and trends shaping our understanding of generative AI technology and its applications.


by Hany Farid, UC Berkeley Professor, CAI Advisor

News and trends shaping our understanding of generative AI technology and its applications.

Advances in generative AI continue to stun and amaze. It seems like every month we see rapid progression in the power and realism of AI-generated images, audio, and video. At the same time, it seems like we are also seeing rapid advances in how the resulting content is being weaponized against individuals, societies, and democracies. In this post, I will discuss trends that have emerged in the new year.

First it was Instagram ads of Tom Hanks promoting dental plans. Then it was TV personality Gayle King hawking a sketchy weight-loss plan. Next, Elon Musk was shilling for the latest crypto scam, and, most recently, Taylor Swift was announcing a giveaway of Le Creuset cookware. All ads, of course, were fake. 

How it works

Each of these financial scams was powered by a so-called lip-sync deepfake, itself powered by two separate technologies. First, a celebrity's voice is cloned from authentic recordings. Where it used to take hours of audio to convincingly clone a person's voice, today it takes only 60 to 90 seconds of authentic recording. Once the voice is cloned, an audio file is generated from a simple text prompt in a process called text-to-speech. 

In a variant of this voice cloning, a scammer creates a fake audio file by modifying an existing audio file to sound like someone else. This process is called speech-to-speech. This latter fake is a bit more convincing because with a human voice driving the fake, intonation and cadence tend to be more realistic.

Once the voice has been created, an original video is modified to make the celebrity’s mouth region move consistently with the new audio. Tools for both the voice cloning and video generation are now readily available online for free or for a nominal cost.

Although the resulting fakes are not (yet) perfect, they are reasonably convincing, particularly when being viewed on a small mobile screen. The genius — if you can call it that — of these types of fakes is that they can fail 99% of the time and still be highly lucrative for scam artists. More than any other nefarious use of generative AI, it is these types of frauds and scams that seem to have gained the most traction over the past few months. 

Protecting consumers from AI-powered scams

These scams have not escaped the attention of the US government. In March of last year, the Federal Trade Commission (FTC) warned citizens about AI-enhanced scams. And more recently, the FTC announced a voice cloning challenge designed to encourage "the development of multidisciplinary approaches — from products to policies to procedures — aimed at protecting consumers from AI-enabled voice cloning harms, such as fraud and the broader misuse of biometric data and creative content. The goal of the challenge is to foster breakthrough ideas on preventing, monitoring, and evaluating malicious voice cloning."

The US Congress is paying attention, too. A bipartisan bill, the NO FAKES Act, would "prevent a person from producing or distributing an unauthorized AI-generated replica of an individual to perform in an audiovisual or sound recording without the consent of the individual being replicated." 

Acknowledging that there may be legitimate uses of AI-powered impersonations, the Act has carve-outs for protected speech: "Exclusions are provided for the representation of an individual in works that are protected by the First Amendment, such as sports broadcasts, documentaries, biographical works, or for purposes of comment, criticism, or parody, among others." While the NO FAKES Act focuses on consent, Adobe’s proposed Federal Anti-Impersonation Right (the FAIR Act) provides a new mechanism for artists to protect their livelihoods while also protecting the evolution of creative style.

Looking ahead

Voice scams will come in many forms, from celebrity-powered scams on social media to highly personalized scams on your phone. The conventional wisdom of "If it seems too good to be true, it probably is" will go a long way toward protecting you online. In addition, for now at least, the videos often have telltale signs of AI-generation because there are typically several places where the audio and video appear de-synchronized, like a badly dubbed movie. Recognizing these flaws just requires slowing down and being a little more thoughtful before clicking, sharing, and liking.

Efforts are underway to add digital provenance or verifiable Content Credentials to audio. Respeecher, a voice-cloning marketplace gaining traction among creators and Hollywood studios, is adding Content Credentials to files generated with its tool.

For the more personalized attacks that will reach you on your phone in the form of a loved one saying they are in trouble and in need of cash, you and your family should agree on an easy-to-remember secret code word that can easily distinguish an authentic call from a scam.

Subscribe to the CAI newsletter to receive ecosystem news.

Stay connected and consider joining the movement to restore trust and transparency online.

Author bio: Professor Hany Farid is a world-renowned expert in the field of misinformation, disinformation, and digital forensics. He joined the Content Authenticity Initiative (CAI) as an advisor in June 2023. The CAI is an Adobe-led community of media and tech companies, NGOs, academics, and others working to promote adoption of the open industry standard for content authenticity and provenance.

Professor Farid teaches at the University of California, Berkeley, with a joint appointment in electrical engineering and computer sciences at the School of Information. He’s also a member of the Berkeley Artificial Intelligence Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering Program, and Vision Science Program, and he’s a senior faculty advisor for the Center for Long-Term Cybersecurity. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception.

He received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989, his M.S. in computer science from SUNY Albany, and his Ph.D. in computer science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019.

Professor Farid is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and he’s a fellow of the National Academy of Inventors.


Identity At The Center - Podcast

Announcing another episode of The Identity at the Center Podcast!


Announcing another episode of The Identity at the Center Podcast! Join us as we dive into answering voicemail questions from our listeners. In this episode, we discuss topics such as the barrier of entry to IAM for entry-level roles, the role of IAM architects, influential roles in IAM with the rise of AI, and the choice between using Microsoft Enterprise Identity Protection or a dedicated third-party ITDR (Identity Threat Detection and Response) solution.

Congrats to listeners Andrew, Alex, Tim, Pedro, and Chris for sending in their questions and winning a digital copy of the book “Learning Digital Identity” by and courtesy of Phil Windley

You can listen to this episode and catch up on all our previous episodes at idacpodcast.com or on your favorite podcast app.

#iam #podcast #idac

Friday, 02. February 2024

Ceramic Network

CharmVerse X Ceramic: Empowering User Data Ownership in the Blockchain Era

Discover how CharmVerse integrated Ceramic to implement a credentialing and rewards system that supports user-sovereign data.

CharmVerse, a pioneering web3 community engagement and onboarding platform, recently integrated ComposeDB on Ceramic to store user attestations for grants and rewards. CharmVerse’s decision to build on Ceramic was driven by the need to store credentials in a user-owned, decentralized manner, without relying on traditional databases.

A who’s who of well-known web3 projects leverage CharmVerse to help manage their community and grants programs. Optimism, Game7, Mantle, Safe, Green Pill, Purple DAO, Orange DAO, Taiko (and the list goes on) have all experienced the need for a unique, web3-centric platform to interact with and empower their ecosystems.

What Objectives Does the Integration Address?

The work of vetting developer teams and distributing grants demands a significant investment of time and focus to ensure responsible treasury deployment. This need-driven use case is a wonderful fit for Ceramic’s capabilities.

CharmVerse identified an opportunity to enhance grants/community managers’ capabilities by implementing a credentialing and rewards system that supports user-sovereign data. This system allows grants managers to better understand their applicants, scale the number of teams they can work with, and issue attestations representing skills and participation in grants and other community programs, creating a verifiable record of participation. However, this solution came with technical challenges in maintaining user data privacy and ownership while ensuring decentralization as this data represents significant insight into the historical activity and capabilities of individuals and teams.

Why did CharmVerse Choose Ceramic?

CharmVerse considered various options but ultimately chose Ceramic due to its unique capability to support decentralized credentials and store attestations in a way that aligned with CharmVerse's vision. Alex Poon, CEO & co-founder of CharmVerse, shared:

“Ceramic's unique approach to data decentralization has been a game changer for us, allowing us to truly empower our users while respecting their privacy, allowing users the choice to keep their data private or publish it on-chain. This integration aligns perfectly with CharmVerse's success metrics, centering on community empowerment and data sovereignty.”

How did CharmVerse Integrate Ceramic?

CharmVerse's integration utilizes Ceramic's ability to store user attestations and leverages Ceramic’s work with the Ethereum Attestation Service (EAS) as the underlying model for supporting decentralized credentials. The integration was not only a technical milestone for CharmVerse but also achieved the strategic goal of appealing to an audience concerned with data privacy and ownership.

More specifically, CharmVerse issues off-chain signed attestations in recognition of important grant program milestones (designed to award these credentials both when users create proposals, and when their grants are accepted). Given Ceramic’s open-access design, we expect to see other teams utilize these credentials issued by CharmVerse as a strong indication of applicant reputation, track record, and ability to deliver.

How to See CharmVerse in Action

This collaboration illustrates the power of innovative solutions in advancing blockchain usability, value, and adoption while maintaining the values of the early cypherpunk vision of decentralization. If you would like to check out this integration and use the tool to manage your own community programs, visit app.charmverse.io and follow the CharmVerse X account for more updates!


FIDO Alliance

Recap: 2024 Identity, Authentication and the Road Ahead Policy Forum

What’s the state of identity and authentication in 2024? That was the primary topic addressed in a day full of insightful speaker sessions and panels at the annual Identity, Authentication […]

What’s the state of identity and authentication in 2024?

That was the primary topic addressed in a day full of insightful speaker sessions and panels at the annual Identity, Authentication and the Road Ahead Policy Forum held on January 25 in Washington D.C. The event was sponsored by the Better Identity Coalition, the FIDO Alliance, and the ID Theft Resource Center (ITRC). 

Topics covered included the latest data on identity theft, financial crimes involving compromised identities and the overall ongoing challenges of identity and authentication. The opportunities for phishing-resistant authentication standards and passkeys resonated throughout the event as well. In his opening remarks, Jeremy Grant of the Better Identity Coalition framed identity as both a cause and potential solution to security problems. 

White House advances strong authentication agenda

In the opening keynote, Caitlin Clarke,  Senior Director, White House National Security Council, detailed some of the steps the Biden-Harris administration is taking to improve digital identity and combat rising cybercrime.

“Money is fuelling the ecosystem of crime, but we often see that identity is either the target or the culprit of the cyber incidents that we are seeing every day,” Clarke said. 

In a bid to help improve the state of identity and authentication, the administration is implementing multi-factor authentication (MFA) for all federal government systems. Clarke also highlighted that the administration strongly believes in implementing phishing-resistant MFA.

“We need to make it harder for threat actors to gain access into systems by requiring and ensuring that a person is who they say they are beyond the username and password,” she said. “That is why authentication is also at the heart of the work we are doing to improve the cybersecurity of critical infrastructure, upon which we all rely.”

The role of biometrics

Biometrics have a role to play in the authentication and identity landscape according to a panel of experts.

The panel included Arun Vemury, Biometrics Expert and ITRC Advisory Board Member; James Lee, COO of the Identity Theft Resource Center; Dr. Stephanie Schuckers, Director, Center for Identification Technology Research (CITeR), Clarkson University; and John Breyault VP, Public Policy, Telecom and Fraud, at National Consumers League.

Panelists generally agreed that properly implemented biometrics combined with other security practices could help devalue stolen identity data and strengthen security overall. 

“Biometrics has the potential to affect fraud numbers,” Breyault said. “It’s not a silver bullet, it’s not going to stop everyone and, it may not be useful in every context, but it is something different than what we’re doing now.”

Better Identity at 5 years

Five years ago, the Better Identity Coalition published Better Identity in America: A Blueprint for Policymakers in response to significant questions from both government and industry about the future of how the United States should address challenges in remote identity proofing and other key issues impacting identity and authentication.

Jeremy Grant, Coordinator at the Better Identity Coalition, detailed the progress made in the past five years and also detailed new guidance for 2024.

The report assessed that while some progress has been made in certain areas like promoting strong authentication, overall the government receives poor grades for failing to prioritize the development of modern remote identity proofing systems or establish a national digital identity strategy. 

The revised blueprint outlines 21 new recommendations and action items for policymakers to help close gaps in America’s digital identity infrastructure and get ahead of growing security and privacy challenges posed by issues like synthetic identity fraud and deep fakes.

“Our message today is the same as it was back in 2018, which is that if you take this as a package, if this policy blueprint is enacted and funded by government, it’s going to address some very critical challenges in digital identity and as the name of our coalition would suggest, make things better,” Grant said.

The year of passkeys

While there is much to lament about the state of identity and authentication, there is also cause for optimism too.

Andrew Shikiar, executive director of the FIDO Alliance detailed the progress that has been made in the past year with the rollout and adoption of passkey deployments.

“Passkeys are simpler, stronger authentication, they are a password replacement,” he said. 

Shikiar noted that there are now hundreds of companies enabling consumers to use passkeys, which is helping to dramatically improve the overall authentication landscape. Not only is a passkey more secure, he also emphasized that it’s easier for organizations to use, than traditional passwords and MFA approaches.

“If you’re in the business of selling things, or providing content, or anything like that you want people to get on your site as quickly as possible –  passkeys are doing that,” he said.

Shikiar noted that the FIDO Alliance understands that user authentication is just one piece of the identity value chain. To that end the FIDO Alliance has multiple efforts beyond passkeys, including certification programs for biometrics and document authenticity certification programs among other efforts.

Don’t want to get breached? Use strong, phishing-resistant authentication

The primary importance of strong authentication was highlighted by Chris DeRusha, Federal Chief Information Security Officer in the  Office of Management and Budget (OMB), who detailed a recent report on a Lapsus cybersecurity gang that was released by the Cyber Safety Review Board. 

DeRusha noted that Lapsus hackers were able to beat MFA prompts using a variety of techniques, including social engineering and even just mass spamming employees with prompts to get someone to act.

A key recommendation from the report is to move away from phishable forms of MFA, including SMS and instead embrace FIDO based authentication with passkeys.

The view from FinCEN

The U.S. Treasury’s Financial Crimes Enforcement Network, more commonly known by the acronym FinCEN, is a critical element of the U.S financial system.

FinCEN Director Andrea Gacki spoke at the event about the agency’s recent progress on beneficial ownership reporting and the FinCEN Identity Project. The FinCEN Identity Project refers to FinCEN’s ongoing work related to analyzing how criminals exploit identity-related processes to perpetrate financial crimes. As part of this, FinCEN published a financial trends analysis earlier this month that looked at 2021 Bank Secrecy Act data to quantify how bad actors take advantage of identity processes during account openings, access, and transactions.

“Robust customer identity processes are the foundation of a secure and trusted U.S. financial system and are fundamental to the effectiveness of every financial institution,” Gacki said.

Sean Evans, lead cyber analyst at FinCEN, noted that the recent report examined over 3.8 million suspicious activity reports filed in 2021 and found that approximately 1.6 million reports, representing $212 billion in activity, involved some form of identity exploitation. Evans explained that cybercriminals are finding ways to circumvent or exploit weaknesses in identity validation, verification, and authentication processes to conduct illicit activities like fraud.

Kay Turner, chief digital identity adviser at FinCEN, emphasized that strengthening identity verification is critical for security. 

“We have to get identity right, it is vital to building trust in the system,” Turner stated.

CISA praises the push towards passkeys

Closing out the event was a keynote from Eric Goldstein, Executive Assistant Director for Cybersecurity, Cybersecurity and Infrastructure Security Agency, (CISA), Department of Homeland Security (DHS).

Goldstein emphasized that it’s important to note that while there are challenges, there has also been progress. Passkeys are now used by consumers everyday and increasing numbers of enterprises are moving toward passwordless deployments.

“It’s worth starting out just with some reflection on how far we have come in moving towards a passwordless future,” Goldstein said. “We are seeing more and more enterprises moving to passwordless for their enterprise privileges, their admin, and their employee authentication solutions, and that’s a remarkable shift.”


GS1

Maintenance release 2.8


GS1 GDM SMG voted to implement the 2.8 standard into production in November 2023.

Key Milestones:

See GS1 GDM Release Schedule

As content for this release is developed it will be posted to this webpage followed by an announcement to the community to ensure visibility.
GDSN Data Pools should contact the GS1 GDSN Data Pool Helpdesk to understand the plan for the update. Trading Partners should work with their Data Pools (if using GDSN) and/or Member Organisations on understanding the release and any impacts to business processes.

GDM 2.8 contains updates for two work requests and includes reference material aligned with ADB 2.2 and GDSN 3.1.25

 

Updated For Maintenance Release 2.8

GDM Standard 2.8 (November 2023)

Local Layers For Maintenance Release 2.8

China - GSMP RATIFIED (April 2022)

France - GSMP RATIFIED (November 2023)

Germany - GSMP RATIFIED (November 2023)

Poland - GSMP RATIFIED (November 2023)

Romania - GSMP RATIFIED (17 December 2021)

USA - GSMP RATIFIED (UPDATED February 2023)

Finland - GSMP RATIFIED (November 2023)

 

Release Guidance

GDM Market Stages Guideline (June 2023)

GDM Attribute Implementation Guideline (Nov 2023)

GPC Bricks To GDM (Sub-) Category Mapping (March 2024)

Attribute Definitions for Business (November 2023)

GDM (Sub-) Categories (October 2021)

GDM Regions and Countries (17 December 2021)

GDSN Release 3.1.25 (August 2023)

Tools

GDM Navigator on the Web 

GS1 GDM Attribute Analysis Tool (Nov 2023)

GDM Local Layer Submission Template (May 2023)

Training

E-Learning Course

Any questions?

We can help you get started using GS1 standards.

Contact your local office

Thursday, 01. February 2024

Ceramic Network

WalletConnect Tutorial: Create User Sessions with Web3Modal

Learn how to use WalletConnect's Web3Modal toolset to create Ceramic user sessions in this interactive technical tutorial.

WalletConnect offers Web3 developers powerful tools to make building secure, interactive, and delightful decentralized applications easier. This tooling incorporates best-in-class UX and UI with a modular approach to a suite of SDKs and APIs. For many teams looking to accelerate their development cadence without sacrificing security or quality, WalletConnect's various SDKs are an obvious choice.

One of our favorites is Web3Modal - a toolset that provides an intuitive interface for dApps to authenticate users and request actions such as signing transactions. Web3Modal supports multiple browser wallets (such as MetaMask and Trust Wallet) and offers thorough instruction in their documentation to help developers get up and running across multiple frameworks (React, Next, Vue, etc). For this tutorial, we will show how to use WalletConnect's Web3Modal for user authentication and the creation of user sessions.

Ready? Awesome! Let's get started

What Will We Build?

For this tutorial, we will build an application to track event attendance. The use case here is somewhat basic - imagine a conference that wants to keep track of which participants went to which event. They might allow participants to scan a QR code that takes them to this application where they can sign in (with their wallet), optionally opt into sharing their location, and generate a badge showing that they attended.

Here's a simple visual of the user flow:

Based on the summary above, it might be obvious where Web3Modal fits in. That's right - we will be using this SDK to authenticate users and keep track of who attended what event based on their wallet address.

We've made up two imaginary events to align with this use case:

Encryption Event
Wallet Event

Below is a sneak peek at our app's UI:

What's Included in Our Technical Stack?

To power this simple application, we will need a few things:

A frontend framework that runs in the attendee's browser and a backend to handle any internal API calls we'll need - we will use NextJS
Wallet tooling so we don't have to build authentication logic from scratch - Web3Modal
React hooks that work with our browser wallet so we don't have to build these either - we'll use Wagmi
Decentralized data storage - we'll use ComposeDB (graph database built on Ceramic)

Why ComposeDB?

If dealing with potentially thousands (or more) attendees to these imaginary events (as is often the case with large conferences), storing these records on-chain would be both costly and inefficient. Each record would incur gas fees, and querying the blockchain across tens of thousands of records would be arduous.

Nonetheless, we want our application to give data control to the users who attend the events. And, in our imaginary use case, other conferences must have access to this data (not just our application) so they can determine who should receive admission priority. We will therefore require some sort of decentralized data network.

In Ceramic (which is what ComposeDB is built on), user data is organized into verifiable event streams that are controlled exclusively by the user account that created each stream. Since Ceramic is a permissionless open data network, any application can easily join and access preexisting user data (which meets one of the requirements listed above).

Applications that build on Ceramic/ComposeDB authenticate users (using sign-in with Ethereum), creating tightly-scoped permission for the application to write data to the network on the user's behalf. This is important for us because our application's server will need to cryptographically sign the badge (to prove the badge was indeed generated through our application) before saving the output in Ceramic on the user's behalf.
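As a rough illustration of what that scoped sign-in looks like in code, the sketch below authorizes a Ceramic session with sign-in with Ethereum. It assumes the did-session and @didtools/pkh-ethereum packages and an injected browser wallet; check the current Ceramic documentation for the exact package and function names before relying on it.

import { DIDSession } from "did-session";
import { EthereumWebAuth, getAccountId } from "@didtools/pkh-ethereum";

// Minimal sketch (assumed packages): create a tightly-scoped session that lets
// the app write the attendance models on the authenticated user's behalf.
async function authorizeSession(resources: string[]) {
  const ethProvider = (window as any).ethereum; // injected browser wallet
  const accounts = await ethProvider.request({ method: "eth_requestAccounts" });
  const accountId = await getAccountId(ethProvider, accounts[0]);
  const authMethod = await EthereumWebAuth.getAuthMethod(ethProvider, accountId);

  // `resources` restricts the session to just the models this app needs.
  return DIDSession.authorize(authMethod, { resources });
}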

Finally, ComposeDB adds a graph database interface on top of Ceramic, making it easy to query, filter, order, and more (using GraphQL) across high document volumes - an ideal fit for any teams who want to consume these badges and perform computation over them in an efficient manner.

We will go into more detail throughout this tutorial.

Getting Started

We have set up a special repository for you to help guide you through - keep in mind that we will need to add to it using the below steps for it to work.

Start by cloning the demo application repository and install your dependencies:

git clone https://github.com/ceramicstudio/walletconnect-tutorial
cd walletconnect-tutorial
npm install

Go ahead and open the directory in your code editor of choice. If you take a look at your package.json file, you'll see our @web3modal/wagmi and wagmi packages mentioned above, as well as several @ceramicnetwork and @composedb packages to meet our storage needs.

Obtain a WalletConnect Project ID

While your dependencies are downloading, you can create a WalletConnect project ID (which we'll need to configure our Web3Modal - more information on their docs). You can do so for free by visiting the WalletConnect Cloud site, creating a new project (with the "App" type selected), and giving it a name of your choosing:

After you click "Create" you will be directed to the settings page for the project you just set up. Go ahead and copy the alphanumeric value you see next to "Project ID."

Back in your text editor, navigate to your /src/pages/_app.tsx file and enter the ID you just copied into the blank field next to the projectId constant. Notice how we use this ID and a mainnet chain setting when defining our wagmiConfig (later used to create our Web3Modal). Just as the Web3Modal docs instructed, we are setting up these functions outside our React components, and wrapping all child components with our WagmiConfig wrapper:

const projectId = '<your project ID>'
const chains = [mainnet]
const wagmiConfig = defaultWagmiConfig({ chains, projectId })

createWeb3Modal({ wagmiConfig, projectId, chains })

const MyApp = ({ Component, pageProps }: AppProps) => {
  return (
    <WagmiConfig config={wagmiConfig}>
      <ComposeDB>
        <Component {...pageProps} ceramic />
      </ComposeDB>
    </WagmiConfig>
  );
}

export default MyApp

We can now make our Web3Modal button accessible to child components of our application to allow our users to sign in. If you take a look at /src/components/nav.tsx, you'll see that we placed our <w3m-button /> component directly into our navigation to allow users to sign in/out on any page of our application (at the moment our application only has 1 page).

Notice how we make use of the size and balance properties - these are two of several settings developers can use to further customize the modal's appearance. These two in particular are fairly simple to understand - one alters the size of the button, while the other hides the user's balance when the user is authenticated.
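For reference, the relevant part of the navigation might look something like the sketch below. This is illustrative rather than a copy of nav.tsx, and the specific prop values ("sm", "hide") are assumptions based on the Web3Modal web component API - check the current Web3Modal docs for the valid values.

// Hypothetical declaration so TypeScript accepts the <w3m-button> custom element;
// the real project already wires this up via Web3Modal's own types.
declare global {
  namespace JSX {
    interface IntrinsicElements {
      "w3m-button": { size?: "md" | "sm"; balance?: "show" | "hide" };
    }
  }
}

export const Nav = () => (
  <nav>
    {/* size shrinks the button; balance="hide" hides the user's balance once connected */}
    <w3m-button size="sm" balance="hide" />
  </nav>
);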

Finally, you probably noticed in your /src/pages/_app.tsx file that we're also utilizing a <ComposeDB> context wrapper. This is what we will explain next.

Create a ComposeDB Configuration

Now that we've created our Wagmi configuration, we will need to set up our ComposeDB data storage. There are several steps involved (all of which have been taken care of for you). These include:

Designing the data models our application will need
Creating a local node/server configuration for this demo (a production application would use a hosted node instead)
Deploying our data models onto our node
Defining the logic our application will use to read from and write to our ComposeDB node

Data Models

If you take a look at your /composites folder, you'll see an /attendance.graphql file where we've already defined the models our application will use. In ComposeDB, data models are GraphQL schemas that define the requirements for a single piece of data (a social post, for example), as well as its relations to other models and accounts. Since Ceramic is an open data network, developers can build on preexisting data models (you can explore tools like S3 to observe existing schemas), or define brand-new ones for their app.

In our case, our application will leverage a general event interface that our two event types will implement:

interface GeneralAttendance @createModel(description: "An interface to query general attendance") {
  controller: DID! @documentAccount
  recipient: String! @string(minLength: 42, maxLength: 42)
  latitude: Float
  longitude: Float
  timestamp: DateTime!
  jwt: String! @string(maxLength: 100000)
}

type EncryptionEvent implements GeneralAttendance
  @createModel(accountRelation: SINGLE, description: "An encryption event attendance") {
  controller: DID! @documentAccount
  recipient: String! @string(minLength: 42, maxLength: 42)
  latitude: Float
  longitude: Float
  timestamp: DateTime!
  jwt: String! @string(maxLength: 100000)
}

type WalletEvent implements GeneralAttendance
  @createModel(accountRelation: SINGLE, description: "A wallet event attendance") {
  controller: DID! @documentAccount
  recipient: String! @string(minLength: 42, maxLength: 42)
  latitude: Float
  longitude: Float
  timestamp: DateTime!
  jwt: String! @string(maxLength: 100000)
}

Notice how we've set the accountRelation field for both types to "SINGLE" - what this means is that 1 user can only ever have 1 model instance of that type, thus creating a 1:1 account relationship. This is contrasted with "LIST" accountRelation which would indicate a 1:many relationship.

You'll also notice that our latitude and longitude fields do not use a ! next to their scalar definition - what this means is that they are optional, so a model instance can be created with or without these fields defined.

Finally, we will use our jwt field to record the signed badge payload our server will create for the user. Since the user will ultimately be in control of their data, a potentially deceptive user could try to change the values of their model instance outside the confines of our application. Seeing as our architecture requires a way for both our application and other conferences to read and verify this data, the jwt field creates a tamper-evident proof of the values by tying the cryptographic signature of our application's DID to the data.

Create a Local Server Configuration

Seeing as this is just a demo application and we don't have a cloud-hosted node endpoint to access, we will define a server configuration to run locally on our computer. While there are multiple server settings an application can leverage, the key items to know for this demo are the following:

Our app will run on the inmemory network, whereas a production application will use mainnet for its network setting
Our server will use sqlite as its SQL index, whereas a production application would use PostgreSQL
Our IPFS service will run in bundled mode (ideal for early prototyping), whereas a production application would run it in remote mode

Finally, each Ceramic node is configured with an admin DID used to authenticate with the node and perform tasks like deploying models. This is different from the DIDs end users will use when authenticating themselves using their wallet and writing data to the network.

Fortunately, we've taken care of this for you by creating a command. Simply run the following in your terminal once your dependencies are installed:

npm run generate

If you take a look at your admin_seed.txt file you will see the admin seed your Ceramic node will use. Your composedb.config.json file is where you'll find the server configuration you just created.
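For orientation, the settings called out above map to configuration fields that look roughly like the following (shown here as a TypeScript object for readability; the generated composedb.config.json contains additional fields and the exact keys may differ slightly):

// Trimmed, illustrative view of the generated server configuration.
const serverConfig = {
  network: { name: "inmemory" },                           // a production app would use "mainnet"
  ipfs: { mode: "bundled" },                               // a production app would use "remote"
  indexing: { db: "sqlite:///<path-to>/ceramic.sqlite" },  // a production app would point at PostgreSQL
};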

Deploying the Models onto Our Node

Seeing as we're not using a preexisting node endpoint that's already set up to index the data models we care about, we'll need a way to deploy our definitions onto our node. If you look at /scripts/composites.mjs you'll find a writeComposite method we've created for you that reads from our GraphQL file, creates an encoded runtime definition and deploys the composite onto our local node running on port 7007.

The important thing to take note of here is how the writeEncodedCompositeRuntime method generates a definition in our definition.js file. We will explain in the next step how this is used by our client-side library to allow our application to interact with these data models and our Ceramic node.
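As a rough sketch of what that script does (assuming the @composedb/devtools-node helpers commonly used in ComposeDB projects; the file paths here are illustrative, not necessarily the ones used in the repository):

import { CeramicClient } from "@ceramicnetwork/http-client";
import {
  createComposite,
  writeEncodedComposite,
  writeEncodedCompositeRuntime,
} from "@composedb/devtools-node";

// Connect to the local node on port 7007; ceramic.did must already be set to
// the admin DID derived from admin_seed.txt before models can be deployed.
const ceramic = new CeramicClient("http://localhost:7007");

// Read the GraphQL models and turn them into a composite.
const composite = await createComposite(ceramic, "./composites/attendance.graphql");

// Deploy: tell the node to start indexing the models in the composite.
await composite.startIndexingOn(ceramic);

// Persist the encoded composite, then generate the runtime definition that the
// client-side ComposeClient imports.
await writeEncodedComposite(composite, "./src/__generated__/definition.json");
await writeEncodedCompositeRuntime(
  ceramic,
  "./src/__generated__/definition.json",
  "./src/__generated__/definition.js",
);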

Don't take any action yet - we will explain how to use this script in the coming steps.

Integrating ComposeDB with Our Application

Finally, as mentioned above, we will need a way for our application to read from and write to our ComposeDB node. We will also need a way to combine our Web3Modal authentication logic with the need to authenticate users onto our node.

If you take a look at /src/fragments/index.tsx you'll find a ComposeDB component that allows us to utilize React's createContext API and create a wrapper of our own. Since we know Web3Modal will make use of our wallet client, we can leverage the wallet client to request a Ceramic user session authentication from our user.

Observe the following:

const CERAMIC_URL = process.env.URL ?? "http://localhost:7007";

/**
 * Configure ceramic Client & create context.
 */
const ceramic = new CeramicClient(CERAMIC_URL);

const compose = new ComposeClient({
  ceramic,
  definition: definition as RuntimeCompositeDefinition,
});

let isAuthenticated = false;

const Context = createContext({ compose, isAuthenticated });

export const ComposeDB = ({ children }: ComposeDBProps) => {
  function StartAuth() {
    const { data: walletClient } = useWalletClient();
    const [isAuth, setAuth] = useState(false);

    useEffect(() => {
      async function authenticate(
        walletClient: GetWalletClientResult | undefined,
      ) {
        if (walletClient) {
          const accountId = await getAccountId(
            walletClient,
            walletClient.account.address,
          );
          const authMethod = await EthereumWebAuth.getAuthMethod(
            walletClient,
            accountId,
          );
          const session = await DIDSession.get(accountId, authMethod, {
            resources: compose.resources,
          });
          await ceramic.setDID(session.did as unknown as DID);
          console.log("Auth'd:", session.did.parent);
          localStorage.setItem("did", session.did.parent);
          setAuth(true);
        }
      }
      void authenticate(walletClient);
    }, [walletClient]);

    return isAuth;
  }

  if (!isAuthenticated) {
    isAuthenticated = StartAuth();
  }

  return (
    <Context.Provider value={{ compose, isAuthenticated }}>
      {children}
    </Context.Provider>
  );
};

Notice how we're using the wallet client's account address to initiate a DID session that asks for the specific resources exposed by compose. If you trace this back, you'll see that compose was instantiated using the definition imported from the file our deployment script wrote into. This grants our application a tightly scoped permission to write data on the user's behalf, limited to the data models our application uses (these sessions auto-expire after 24 hours).

Finally, to bring this full circle, back to our /src/pages/_app.tsx file, you should now understand how we're able to use ComposeDB as a contextual wrapper, enabling us to access both the ComposeDB client libraries and our model definitions from within any child component. For example, if you take a look at /src/components/index.tsx you'll see how we're now able to utilize our useComposeDB hook that allows us to run queries against our node's client.

Create a Seed for our Application's Server DID

We mentioned above that we'll want our application to sign each badge payload before handing document control back to the end user. While this flow will not always be the case (read this blog on common data control patterns in Ceramic for more), we'll want to implement this to ensure the verifiability of the data.

In /src/pages/api/create.ts we've created an API our application's server will expose that does exactly this - it takes in the data relevant to the event, uses a SECRET_KEY environment variable to instantiate a static DID, and returns a Base64 string-encoded JSON web signature containing the signed data.

We will therefore need to create a separate static seed to store in a .env file that we'll create:

touch .env

For this tutorial, enter the following key-value pair into your new file:

SECRET_KEY="11b574d316903ced6cc3f4787bbcc3047d9c72d1da4d83e36fe714ef785d10c1"

When you use the above seed to instantiate a DID, this will yield the following predictable DID:

did:key:z6MkqusKQfvJm7CPiSRkPsGkdrVhTy8EVcQ65uB5H2wWzMMQ
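As a minimal sketch of how a seed like this becomes a DID that can sign a badge payload (assuming the dids, key-did-provider-ed25519, key-did-resolver, and uint8arrays packages; the exact serialization used by create.ts may differ):

import { DID } from "dids";
import { Ed25519Provider } from "key-did-provider-ed25519";
import { getResolver } from "key-did-resolver";
import { fromString } from "uint8arrays/from-string";

// Turn the hex seed from SECRET_KEY into a deterministic did:key.
async function getServerDID(seedHex: string): Promise<DID> {
  const seed = fromString(seedHex, "base16"); // 32-byte private seed
  const did = new DID({
    provider: new Ed25519Provider(seed),
    resolver: getResolver(),
  });
  await did.authenticate(); // did.id now resolves to the did:key shown above
  return did;
}

// Sign an arbitrary badge payload; a Base64 encoding of the resulting JWS is
// what ends up in the jwt field.
async function signBadge(did: DID, payload: Record<string, unknown>): Promise<string> {
  const jws = await did.createJWS(payload);
  return Buffer.from(JSON.stringify(jws)).toString("base64");
}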

If you look back into /src/components/index.tsx you'll see how our lengthy getParams method performs a quick check against any existing EncryptionEvent or WalletEvent badges the user already holds to test whether the jwt value was indeed signed by our application (a more thorough version could also verify that the signed data matches the values of the other fields, but we'll leave that up to you to add).
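A simplified sketch of that check might look like the following (again assuming the dids and key-did-resolver packages, and the same Base64-of-JWS encoding as in the signing sketch above):

import { DID } from "dids";
import { getResolver } from "key-did-resolver";

const APP_DID = "did:key:z6MkqusKQfvJm7CPiSRkPsGkdrVhTy8EVcQ65uB5H2wWzMMQ";

// Decode the stored value and check that the JWS was produced by our server DID.
async function wasSignedByApp(encodedJwt: string): Promise<boolean> {
  const jws = JSON.parse(Buffer.from(encodedJwt, "base64").toString());
  const verifier = new DID({ resolver: getResolver() });
  try {
    const { kid } = await verifier.verifyJWS(jws);
    // kid looks like "<did>#<key fragment>"; compare only the DID portion.
    return kid.split("#")[0] === APP_DID;
  } catch {
    return false; // malformed JWS or invalid signature
  }
}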

That's it! We are finally ready to run our application!

Running the Application

Now that we've set up everything we need for our app to run locally, we can start it up in developer mode. Be sure to select the correct node version first:

nvm use 20
npm run dev

Once you see the following in your terminal, your application is ready to view in your browser:

In your browser, navigate to http://localhost:3000 - you should see the following:

Signing in with Web3Modal

As mentioned above, we've made our Web3Modal accessible from our navigation which is where our "Connect Wallet" button is coming from. Go ahead and give this button a click and select your wallet of choice.

During the sign-in cadence, you will notice an additional authorization message appear in your wallet that looks something like this:

If you recall what we covered in the "Integrating ComposeDB with Our Application" section above, you'll remember that we discussed how we created a DIDSession by requesting authorization over the specific resources (data models) our application will be using. These are the 3 items listed under the "Resources" section of the sign-in request you should see.

Finally, after you've signed in, your Web3Modal will now show a truncated version of your address:

Creating Badges

As you can see, our application does not allow the user to input which event they have attended - this is determined by the URL the QR code sends the user to, which has the following format:

http://localhost:3000/?event={event id}

Take a look at your browser console - you should see logs that look similar to this:

We've preset these logs for you by reading from our runtime composite definition that we've imported into the /src/components/index.tsx component. Go ahead and copy one of those fields and construct your URL to look something like this:

http://localhost:3000/?event=kjzl6hvfrbw6c8njv24a3g4e3w2jsm5dojwpayf4pobuasbpvskv21vwztal9l2

If you've copied the stream ID corresponding to the EncryptionEvent model, your UI should now look something like this:

You can optionally select to share your coordinates. Finally, go ahead and create a badge for whichever event you entered into your URL:

If you navigate back to your /src/components/index.tsx file you can observe what's happening in createBadge. After calling our /api/create route (which uses our application server's static DID to sign the event data), we're performing a mutation query that creates an instance of whichever event aligns with the identifier you used in your URL parameter. Since our user is the account currently authenticated on our node (from the creation of our DID session), the resulting document is placed into the control of the end user (with our tamper-evident signed data entered into the jwt field).
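In spirit, the mutation for the EncryptionEvent case looks something like the sketch below (the mutation and input names follow ComposeDB's convention of deriving them from the model name; variable names are illustrative):

import type { ComposeClient } from "@composedb/client";

async function createEncryptionBadge(
  compose: ComposeClient,
  recipient: string,                                   // attendee's wallet address
  signedJwt: string,                                   // Base64-encoded JWS from /api/create
  coords?: { latitude: number; longitude: number },    // only present if the user opted in
) {
  const mutation = `
    mutation CreateBadge($input: CreateEncryptionEventInput!) {
      createEncryptionEvent(input: $input) {
        document { id recipient timestamp }
      }
    }
  `;
  return compose.executeQuery(mutation, {
    input: {
      content: {
        recipient,
        timestamp: new Date().toISOString(),
        jwt: signedJwt,
        ...(coords ?? {}),                             // latitude/longitude are optional fields
      },
    },
  });
}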

If you take a look at our getParams method in our /src/components/index.tsx file, you'll notice that we've created a query against our ComposeDB node that runs both within our useEffect React hook as well as after every badge creation event. Notice how we're querying based on the user's did:pkh: did:pkh:eip155:${chainId}:${address?.toLowerCase()}
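A trimmed sketch of that query is shown below (because both models use accountRelation: SINGLE, each appears as a single, possibly-null field on the CeramicAccount node; the exact field names are generated by ComposeDB from the model names):

import type { ComposeClient } from "@composedb/client";

async function getBadges(compose: ComposeClient, chainId: number, address: string) {
  const did = `did:pkh:eip155:${chainId}:${address.toLowerCase()}`;
  const query = `
    query GetBadges($id: ID!) {
      node(id: $id) {
        ... on CeramicAccount {
          encryptionEvent { recipient timestamp jwt }
          walletEvent { recipient timestamp jwt }
        }
      }
    }
  `;
  return compose.executeQuery(query, { id: did });
}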

If you take a look at our chainId and address assignments, you'll realize these are coming from our Wagmi hooks we mentioned we'd need (specifically useAccount and useChainId).

What's Next?

We hope you've enjoyed this fairly straightforward walk-through of how to use WalletConnect's Web3Modal toolkit for authenticating users, creating user sessions in Ceramic, and querying ComposeDB based on the authenticated user! While that's all for this tutorial, we encourage you to explore the other possibilities and journeys Ceramic has to offer. Below are a few we'd recommend:

Test Queries on a Live Node in the ComposeDB Sandbox

Build an AI Chatbot on ComposeDB

Create EAS Attestations + Ceramic Storage

Finally, we'd love for you to join our community:

Join the Ceramic Discord

Follow Ceramic on Twitter


MyData

How to build a human-centric data space: introducing the Blueprint for the European Data Space for Skills & Employment

The DS4Skills project, funded by the European Commission under the Digital Europe Programme and led by DIGITALEUROPE, has launched its Blueprint for a European Data Space for Skills & Employment (https://skillsdataspace-blueprint.eu/), a comprehensive guide for creating and managing data spaces that collect, store, and share skills & employment data.  A human-centric Data Space for Skills […]

DIDAS

Embracing Standardization in Employee Verification


In today’s fast-moving business world, it’s important for different systems to work well together, and having standard data formats helps a lot with this. This article investigates how the EmployeeID is used, which is a key part of Self-Sovereign Identity (SSI). The adoption group from DIDAS has worked on making sure this EmployeeID follows a set standard. Thanks to their work, we can now see how the EmployeeID can be used in real situations and think about how it can be used more in the future.  

Demonstrating Real-World Applications 

The practicality of the EmployeeID VC was tested in real-world scenarios by the organizations SBB, AXA, Orell Füssli, and Swisscom. Together they showed the power of an interoperable ecosystem based on the credential "EmployeeID". These demos showcased varied applications: 

SBB: Used the VC for external partners to access internal IT systems, replacing traditional username/password methods.
AXA: Employed the VC to verify employment status for online insurance offerings.
Orell Füssli & Swisscom: Utilized the VC to offer employee discounts in online shopping.

More information here: Successfully testing the potential of digital evidence from the private sector  

Collaboration: The Key to Adoption 

One of the primary goals was to ensure that the EmployeeID could seamlessly integrate into various ecosystems. This required a standardized approach that would allow different systems and organizations to interact without compatibility issues.  

This process was not an isolated endeavor. It involved a collaborative effort from experts and stakeholders across different industries. By pooling their knowledge and insights, the group could identify and agree on the most relevant and sustainable attributes. This collaborative approach was instrumental in developing a schema that was not only effective for the present but also robust enough to stand the test of time and technological evolution. 

The outcome of these pre-implementation discussions was a well-thought-out, standardized EmployeeID schema. This schema serves as a testament to the importance of foresight and collaboration in the digital age. By addressing the need for standardization and futureproofing at the initial stages, the EmployeeID schema was positioned to be an enduring and versatile tool, capable of adapting to the ever-changing business and technological landscapes. 

The EmployeeID Schema 

The EmployeeID schema incorporates various attributes, such as: 

Employee Core: A blend of personal and employer data.
Employment Contract: Key contractual information like contract type and working hours.
Role: The employee's role and organizational unit.
Office Address: The physical office location.
Authorization: Specific access rights issued as separate VCs.

While many attributes are initially represented as free text in the logical schema, they should ideally be derived from predefined lists or tables for specific implementations. This becomes especially critical when exchanging information with other organizations, necessitating mutually agreed-upon value sets. 
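To make the attribute groups concrete, a credential subject along these lines could be sketched as follows (field names here are illustrative only and are not the normative DIDAS schema):

// Illustrative sketch only; consult the detailed DIDAS schema for the normative definition.
export interface EmployeeIDSubject {
  employeeCore: {
    givenName: string;
    familyName: string;
    employerName: string;                 // blend of personal and employer data
  };
  employmentContract: {
    contractType: string;                 // ideally drawn from an agreed value set, not free text
    workingHoursPerWeek: number;
  };
  role: {
    title: string;
    organizationalUnit: string;
  };
  officeAddress: {
    street: string;
    city: string;
    country: string;
  };
  // Authorization / access rights are issued as separate VCs rather than embedded here.
}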

Detailed Schema: Detailed Schema of EmployeeID  

Conclusion 

For organizations exploring use cases with SSI or sandbox environments, it’s crucial to engage with the DIDAS adoption group. This collaboration will help define a standard for your verifiable credentials, ensuring a unified approach and compatibility across different ecosystems. 

In conclusion, the adoption of a standardized EmployeeID schema, as illustrated by DIDAS adoption group, is not just a technical necessity but a strategic move towards interoperability and efficiency in the digital age.  


Velocity Network

Neeyamo trailblazes the use of the Internet of Careers® in India

Neeyamo trailblazes the use of the blockchain-based Internet of Careers® in India to accelerate background screening.

Wednesday, 31. January 2024

DIF Blog

Veramo User Group


DIF is thrilled to announce the donation of the Veramo project and the formation of the Veramo User Group.

Veramo is a broadly-used SDK that is a popular choice for SSI implementation, providing core functionality such as

Decentralized Identifier (DID) and Verifiable Credential (VC) management
cryptographic key management
secure peer-to-peer communications through DIDComm

Veramo is a powerful toolkit and base layer that provides the scaffolding to build applications leveraging decentralized identity. Its flexible plugin model and extensible and accessible APIs make it easy to integrate into, and update your stack as the technology evolves, helping you keep up with new protocols, standards and implementations.

“The donation of the Veramo codebase, as well as the formation of the User Group, are significant milestones for Veramo and for DIF. This generous donation will allow the DIF community to contribute to the future of Veramo and provide valued governance” said DIF’s Executive Director, Kim Hamilton Duffy.

Housing Veramo within DIF allows the project to harness unparalleled expertise in decentralized identity, promoting collaboration and driving innovation, while enabling Veramo to solidify its role as a productivity accelerator and reference to ensure interoperability. 

“By providing the DIF’s platform and contributions of the leading decentralized identity experts, while remaining open to all, regardless of DIF membership, Veramo’s new home at DIF will ensure it continues to evolve into the leading decentralized identity toolkit, enabling builders of future identity solutions to move faster,” Kim added. 

"We were proud to donate Veramo to DIF and reaffirm our commitment to public goods development in this space. Now, with the formation of the User Group, we're excited to work even more closely with the DIF community and ensure we can all build a framework that meets everyone's needs" said Head of Identity at Consensys Mesh R&D, Nick Reynolds.

How you can get involved

1. Join the Veramo User Group. The first meeting takes place at 15.00 CET on Thursday 15th January. Meetings will take place weekly, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details
2. Use the code: https://github.com/decentralized-identity/veramo
3. Contribute to the code

More details on #3:

If you are looking to become more active in decentralized identity, or open source generally, Veramo is a great way to get started and join this vibrant community. We are looking for help requiring a wide range of expertise:

Build and/or integrate support for additional standards and protocols, such as Presentation Exchange, SD-JWT, and Aries. In some cases, the implementations exist and just need to be integrated into the Veramo framework.
Support for new cryptographic key types
Process/build improvements: helping with formatting and linting commit hooks, developing a test harness for improved test automation

Finally, consider joining DIF. The Decentralized Identity Foundation opens the door to shaping the future of identity standards and SDKs, offering unmatched opportunities to influence and drive the evolution of digital identity at a global scale. Membership not only places you at the forefront of cutting-edge developments but also embeds you within a community of innovators and thought leaders dedicated to redefining the landscape of digital identity. Find out more here.


DIF Newsletter #37


January 2024

DIF Website | DIF Mailing Lists | Meeting Recording Archive

Table of contents

1. Decentralized Identity Foundation News; 2. Working Group Updates; 3. Open Groups; 4. Announcements at DIF; 5. Community Events; 6. DIF Members; 7. Get involved! Join DIF

🚀 Decentralized Identity Foundation News

Veramo User Group

DIF is thrilled to announce the donation of the Veramo project and the formation of the Veramo User Group.

Veramo is a broadly-used SDK that is a popular choice for SSI implementation, providing core functionality such as

Decentralized Identifier (DID) and Verifiable Credential (VC) management
cryptographic key management
secure peer-to-peer communications through DIDComm

Veramo is a powerful toolkit and base layer that provides the scaffolding to build applications leveraging decentralized identity. Its flexible plugin model and extensible and accessible APIs make it easy to integrate into, and update your stack as the technology evolves, helping you keep up with new protocols, standards and implementations.

“The donation of the Veramo codebase, as well as the formation of the User Group, are significant milestones for Veramo and for DIF. This generous donation will allow the DIF community to contribute to the future of Veramo and provide valued governance” said DIF’s Executive Director, Kim Hamilton Duffy.

Housing Veramo within DIF allows the project to harness unparalleled expertise in decentralized identity, promoting collaboration and driving innovation, while enabling Veramo to solidify its role as a productivity accelerator and reference to ensure interoperability. 

“By providing the DIF’s platform and contributions of the leading decentralized identity experts, while remaining open to all, regardless of DIF membership, Veramo’s new home at DIF will ensure it continues to evolve into the leading decentralized identity toolkit, enabling builders of future identity solutions to move faster,” Kim added. 

"We were proud to donate Veramo to DIF and reaffirm our commitment to public goods development in this space. Now, with the formation of the User Group, we're excited to work even more closely with the DIF community and ensure we can all build a framework that meets everyone's needs" said Head of Identity at Consensys Mesh R&D, Nick Reynolds.

How you can get involved

1. Join the Veramo User Group. The first meeting takes place at 15.00 CET on Thursday 15th January. Meetings will take place weekly, alternating between Noon EST / 18.00 CET and 09.00 EST / 15.00 CET. Click here for more details
2. Use the code: https://github.com/decentralized-identity/veramo
3. Contribute to the code

More details on #3:

If you are looking to become more active in decentralized identity, or open source generally, Veramo is a great way to get started and join this vibrant community. We are looking for help requiring a wide range of expertise:

Build and/or integrate support for additional standards and protocols, such as Presentation Exchange, SD-JWT, and Aries. In some cases, the implementations exist and just need to be integrated into the Veramo framework.
Support for new cryptographic key types
Process/build improvements: helping with formatting and linting commit hooks, developing a test harness for improved test automation

First DIF-sponsored Hackathon inspires participants to discover the power of decentralized identity.

The first ever DIF-sponsored hackathon wrapped up with a Meet The Winners Twitter Space earlier this month which highlighted how the event has boosted skills development, community engagement and participation in DIF. 

422 developers registered and 52 projects were submitted, surpassing expectations. Here's a quick overview of the winners of the DIF Main Prize Pool:

First Prize: Decentralinked
Second Prize: Anonymous Door Unlocking (Meet The Winners blog here)
Third Prize: HealthX Protocol (Meet The Winners blog here)
Honorable Mention: TrustBox  (Meet The Winners blog here)
Honorable Mention: Mail5

Watch out for more Meet The Winners blog posts, and for future events! As DIF hackathons grow, we’ll need more volunteer developer advocates and online event organizers. If you’re interested in volunteering at DIF please reach out to us at membership@identity.foundation.

More on the DIF Hackathon here.

SIDI Hub 2024 roadmap unveiled

The Sustainable & Interoperable Digital Identity (SIDI) Hub's 2024 work plan was announced earlier today.

Dozens of digital identity schemes have been launched or are underway around the world. Yet to date, there is no known scheme considered truly interoperable across borders. SIDI Hub was conceived as a community to accelerate the path to cross-border interoperability.

SIDI Hub held its first summit at TRUSTECH 2023. DIF was one of the event organizers, alongside our liaison partners including the Open ID Foundation, Open Identity Exchange, FIDO and Open Wallet Foundation.

120+ digital identity experts from governments, multilaterals, standards organizations, and non-profits representing 22 countries attended. Over 90% of participants agreed that the work started at the summit must continue in 2024. In response, the SIDI Hub community has defined the following work plan:

Identifying champion use cases for cross-border interoperability that serve as a baseline for all workstreams
Defining minimum interoperability requirements for priority use cases
Mapping trust frameworks across jurisdictions
Defining metrics of success

The group is organizing a series of virtual and in-person meetings this year to progress the roadmap, and invites all organizations involved in the development, adoption and implementation of digital identity solutions to add their voice to this important work. Join the community on LinkedIn, visit the website and sign up for the SIDI Hub newsletter to learn more and stay up to date on the latest news and events.

🛠️ Working Group Updates

💡 Identifiers and Discovery Work Group

The Identifiers and Discovery Work Group hosted a presentation and discussion of the did:dht method, as well as a discussion of DID Rotation, and integrating support for it in the Universal Resolver and Universal Registrar.

The work item "Linked Verifiable Presentations" is progressing with dedicated monthly calls. See https://identity.foundation/linked-vp/

The Work Group seeks input on recent activity for this work item. Please visit our GitHub page and review the issues and PRs!
https://github.com/decentralized-identity/linked-vp

Recordings and notes can be found here: https://github.com/decentralized-identity/identifiers-discovery/blob/main/agenda.md

Identifiers and Discovery Work Group meets bi-weekly at 11am PT / 2pm ET / 8pm CET Mondays

🔐 Applied Cryptography WG

The BBS Signature Scheme continues on its path towards becoming an official web standard after Draft 05 of the specification was published by the Internet Engineering Task Force (IETF) last month.

The BBS Signature Scheme is a secure, multi-message digital signature protocol that supports proving knowledge of a signature while selectively disclosing any subset of the signed messages. Being zero-knowledge, the BBS proofs do not reveal any information about the undisclosed messages or the signature itself, while at the same time guaranteeing the authenticity and integrity of the disclosed messages.

The latest update to the specification follows ongoing work on the spec by DIF's Applied Cryptography Work Group.

Draft 05 of the specification can be viewed here.

The DIF Crypto - BBS work item meets weekly at 11am PT/2pm ET /8pm CET Mondays

📦 Secure Data Storage

Decentralized Web Node (DWN) Task Force
The DWN beta release is out! Check out the reference implementation and the Web SDK, which provides easy-to-use methods for client apps, here

The DWN Task Force still needs help with updating the draft specification to match the reference implementation, and warmly invites all DIF members to join us and listen in on progress.

TBD, a division of Block, has also been working on open-source tooling and onboarding to DWNs, in addition to the core sample implementation. Also a reminder that the DWN companion guide is available.

DIF/CCG Secure Data Storage WG - DWN Task Force meets bi-weekly at 9am PT/12pm ET/6pm CET Wednesdays

Claims & Credentials Working Group

Work on Trust Establishment continues, led by Sam Curren. The calls take place weekly on Monday at 10am PT.

If you are interested in participating in any of the Working Groups highlighted above, or any of DIF's other Working Groups, please click here.

📖 Open Groups at DIF

Korea Special Interest Group

Since the Blockchain Grand Week event in November 2023, awareness of DIF has increased considerably in Korea, reports DIF Korea SIG chairman Kyoungchul Park.

"Since the Financial Services Commission has established association standards for the financial sector, we will look at those standards together, including DIF, W3C, ITU-T and ISO.

"We are also interested in exploring Digital Product Passports, and how they can be applied to high-value products to highlight their unique value and resolve consumer concerns about counterfeit goods," Kyoungchul added.

The SIG has selected discussion on standards related to e-wallets as its main focus for 2024, and will change the current bimonthly meeting to a monthly meeting to facilitate this.

Planned activities include IITP / NIPA consultation and promotion under the Ministry of Science and ICT, KISA seminars under the Ministry of Public Administration and Security and participation in the Information Protection Society and Association's seminar and promotion schedule.

The new meeting time and joining details for the Korea SIG will be announced soon.

DIF Interoperability Group

The Interoperability Open Group is focusing its Q1 2024 efforts towards building an interoperability map comprised of methods/standards/SDKs relevant to interoperability (vendor agnostic). The map aims to serve all DIF working groups and members currently working on solving interoperability challenges. The Interop group's chairs are meeting with other DIF working group chairs and attending DIF meetings this quarter to learn what interoperability challenges the groups are facing. These will be the initial focus areas represented on the interoperability map.

The Interoperability Group meets bi-weekly at 8am PT/11am ET/5pm CET Wednesdays

📡 DIDComm User Group

A reminder that the DIDComm Demo is available. This developer-focused tool allows you to connect to another person, a computer, a phone or simply another window in a different browser tab so you can see DIDComm messages traverse back and forth after the messages have been decrypted. 

The app was developed at Indicio, with the goal to allow people to see how DIDComm works without needing to sift through or learn a substantial stack like Hyperledger Aries Cloud Agent Python (ACA-Py).

Why not try it out for yourself? You'll see a Help button there with a tutorial, plus a link to the GitHub repo. 

The DIDComm user group meets weekly at 12pm PT/3pm ET/ 9pm CET Mondays

🌏APAC / ASEAN Discussion Group

Group participant Finema has developed a comprehensive overview of GLEIF’s vLEI Ecosystem Governance Framework.

The overview, published in an article on Medium, provides a visual guide for different types of stakeholders participating in the ecosystem, the collective identifiers and key management for vLEI, and the 6 variations of vLEI Credentials.

Co-author Yanisa Sunanchaiyakarn will present the overview to the Discussion Group on the February / March call.

We invite everyone in the APAC region to join our monthly calls and contribute to the discussion. You will be able to find the minutes of the latest meeting here.

The DIF APAC call takes place Monthly on the 4th Thursday of the month. Please see the DIF calendar for updated timing.

📢 Announcements at DIF

DIF China SIG: pre-launch event
The DIF China Special Interest Group (SIG) is holding a pre-launch event at 9:30am Beijing time on Friday 2 February / 8.30pm EST on Thursday 1 Feb.

The SIG will be addressed by DIF's Executive Director, Kim Hamilton-Duffy and SIG chairman Xie Jiagui, Chief Engineer at China's Institute for Industrial Internet & Internet of Things, who will introduce the SIG and its future plans, followed by short presentations by SIG members, who will share their work and their thoughts about decentralized identity.

Participants will have an opportunity to ask Kim & Mr Xie questions about DIF, the China SIG and related topics.

Please note: the pre-launch event is taking place on Tencent, due to issues accessing Zoom within China. To join the event, download VooV, login using your Google account, and join the meeting using the following credentials:

#Tencent Meeting Number:540-592-920

#Tencent Meeting Password:227973

Meeting URL if needed: https://meeting.tencent.com/dm/6LSxx0fkDvlc

Here's a screenshot of the login screen:

You can read more about the SIG's goals and related projects here, or follow the SIG's Chinese WeChat channel here.

MOSIP Connect 2024

DIF Steering Committee member Yodahe Zemichael, who serves as Executive Director of National ID Ethiopia, will represent DIF at MOSIP Connect in Addis Ababa in March.

MOSIP is a university-incubated not-for-profit that offers countries modular and open-source technology to build and own their national identity systems. The project was established in 2018 at the International Institute for Information Technology in Bangalore. Today, over 100 million citizens in 11 countries are registered on MOSIP-based systems.

The inaugural MOSIP Connect promises to bring together technologists, policy makers and civil society groups for in-depth discussions and collaboration on inclusive digital identity systems. Participants will learn how digital identity empowers citizens and promotes inclusion, explore how MOSIP addresses concerns around data protection, privacy, and consent, and benefit from networking opportunities.

🗓️ ️Community Events

Digital Switzerland

DIF’s Executive Director, Kim Hamilton Duffy, participated in a panel discussion on the importance of digital identity wallets alongside Beat Jans, the new Head of Switzerland’s Federal Ministry of Justice, who is responsible for the country’s digital identity policy, Stephan Wolf, the outgoing CEO of GLEIF and Daniel Goldscheider, founder of DIF's liaison partner, the Open Wallet Foundation.

The panel was convened at short notice after Mr Jans decided to visit the host event, Digital Switzerland, which took place recently in Davos.

A Swiss E-ID was proposed to voters but rejected in 2021 due to fears that citizens could be tracked during verification, and that private companies would collect and store their data in centralized databases. Subsequent revisions focused on addressing these issues, with Self Sovereign Identity (SSI) principles taking center stage. 

During the panel discussion, Kim pointed out that much of the unnecessary personal data sharing that typically accompanies customer onboarding is due to the lack of a strong digital identity. Conversely, digital identity based on open standards and SSI principles avoids data oversharing by design. It also offers powerful new economic opportunities, by providing the means to share a wide range of identity data in a privacy-preserving way. 

Kim highlighted the opportunity for “reusable identity”, tying it to improved onboarding enabled by decentralized identity standards, and described how Verifiable Credentials (VCs) provide an ‘envelope’ that can store messages ranging from government-backed Identity data to competencies or skills certifications. 

“Once relying parties trust that the envelope securely wraps and conveys one type of message, it opens the door for others, which offer some of the most exciting, potentially transformative signifiers of trust," she said after the event.

“Risks of AI were also an urgent topic of discussion. Indeed the waves of disinformation are getting bigger and coming faster as it becomes easier and cheaper to exploit the possibilities of Large Language Models (LLMs) and deepfakes. Decentralized identity architectures enable harnessing the benefits of AI but with a stronger, more trustworthy foundation. The standards were designed to apply to Non Person Entities (NPEs) as well as natural persons. So for example, Decentralized Identifiers (DIDs) and Verifiable Credentials (VC) can establish a chain of trust demonstrating an AI agent is acting on behalf of a natural person.”

DIF Hackathon Winners share their insights and experiences

DIF caught up with Ken Watanbe, whose team's submission for the DIF Hackathon scooped first place in the TBD (Block) and Trinsic sponsor challenges, as well as winning second place in the main DIF prize pool.

You can read about the team's innovative use of Decentralized Identifiers (DIDs), Verifiable Credentials (VCs) and Decentralized Web Nodes (DWNs) to enable a real-world use case within their university premises here.

We also spoke to Harsh Tyagi, whose team developed HealthX Protocol, winning third place in the DIF Hackathon main prize pool, as well as third place in the TBD (Block) sponsored challenge.

Read the interview with Harsh here.

Finally, Edward Curran told us about his path to the Hackathon, how he found developing TrustBox using Veramo and his experience of participating in the DIF community. Check it out here.

🗣️ DIF Member Announcements

Multiparty Computation (MPC) is an exciting branch of cryptography with an important role to play in the future of cybersecurity.

DIF caught up with Jay Prakash of DIF member Silence Laboratories, who explained how customers are using their MPC protocol to improve wallet security and enable privacy-preserving data collaboration.

Check out Jay's guest blog here.

New Member Orientation

Our Senior Director of Community Engagement, Limari Navarrete, led a New Member Orientation today.

Subscribe to DIF’s eventbrite for upcoming notifications on future orientations and events, here.

🆔 Join DIF!

If you would like to get in touch with us or become a member of the DIF community, please visit our website.

Can't get enough of DIF?
| Follow us on Twitter
| Join us on GitHub
| subscribe on YouTube
| read our DIF blog
| read the archives



DIF Hackathon Meet The Winners: Edward Curran


What led you to start working with Decentralized Identity? 

I studied computer science at university and wrote my dissertation on Bitcoin, as well as developing using Hyperledger Fabric, so I’ve been in the Web3 world for a while.

After uni I got involved in a research project looking into how the mortgage application process could be improved, as part of a knowledge transfer partnership between a university and a bank.

Applying for a mortgage is slow and painful, so streamlining it should make it better for customers and cheaper for the bank. The number of parties — including buyer and seller, lenders, conveyancers, estate agents, credit reference agencies and the Land Registry — made us think “This seems like a problem blockchain can help to solve”. 

However when we spoke to the parties involved, it became clear the core challenge is a data problem. There’s a need to securely exchange data about the property and the participants. That’s not easy, as a lot of the information is private.

All the guidance was, “Don’t put personal information on the blockchain”. So we searched around and learned about SSI (Self Sovereign Identity) and Verifiable Credentials (VCs). We visited the Rebooting the Web of Trust (RWOT) conference and got excited. We thought, “This seems like the right way to do it”. 

At this point I realised I wanted to work in decentralized identity, so I moved to Berlin and worked at Jolocom. 

Tell us about TrustBox, please  

I’m not the only one who’s seen these problems with the property buying process. After I returned to the UK I got involved with the Property Data Trust Framework, which is a group of financial institutions, conveyancers and other industry participants working together to standardise data schemas. 

I came in to try to find a way to standardise the data exchange using VCs. To do that, you need to know who’s allowed to issue and verify certain things, so I started looking into the DIF Trust Establishment specification, which is when I saw the publicity for the DIF Hackathon. 

I thought “I can build something using Trust Establishment”. Initially I was looking at the organizational side. If each organization has a DID, they can recognize each other. I was less clear how it would work for buyers and sellers. DIDs are quite an abstract concept for people to get their heads around. 

People do most of their research on the web, and I liked the idea of mimicking the SSL padlock (the icon displayed by browsers when visiting a secure website) so it verifies a site is part of the Property Data Trust Framework. So we developed a browser extension called TrustSight, building on existing work around DID configurations (used to cryptographically link a DID to a domain).

We also built a tool for deploying trust frameworks, and a tool to visualize trust relationships.

Why did you choose to develop using Veramo? 

It’s tricky maintaining all these DIDs and DID configurations. There's a package by Sphereon around DID configurations, but it doesn’t provide any tooling around issuing or verifying credentials.

I was struggling to get the browser extension to work with JSON LD libraries, so I decided to use Veramo for DID and VC related operations and connect to the Sphereon resolver. 

It’s made my life much easier, particularly if I want to use a new DID method or different VC formats.

What user benefits are you targeting? 

As a buyer, ideally you’d apply to a hundred different lenders in order to get the best deal. However, it’s currently too expensive for lenders to do the checks. As a result, each buyer can really only apply for one mortgage, so people are very conservative about what they are looking for, to ensure they secure a mortgage.

We’re aiming to make the mortgage application process quicker and easier, and to ensure people get the right product for them. 

How was your experience of participating in the Hackathon? 

It was very well-organised. The criteria and timelines were all clear. I really enjoyed the Discord server. Some people were very active there, which made it feel like a community. I also liked the talks that were put on.

It was great to get to the end and see everyone’s submissions, and to feel connected and part of the community. 

What next for TrustBox? 

There is some tooling on the Trust Framework side that doesn’t yet exist, so I’d like to work with DIF on that. 

Downloading a browser extension is still quite a big piece of user friction, ultimately you want to get into the browser itself. That’s one to discuss with the browser companies!


OpenID

SIDI Hub Announces Roadmap to Drive Cross-Border Interoperability for Digital Identity


The OpenID Foundation is proud to be a founding member of the Sustainable and Interoperable Digital Identity (SIDI) Hub.

Interoperability is crucial for a fair and inclusive digital society. By coordinating the digital identity activities already underway, and defining a governance structure for digital identity credentials, the SIDI Hub is helping accelerate the path to cross-border interoperability.  

Following the success of our first summit at TRUSTECH 2023, 90% of participants agreed that our work must continue in 2024.

In response, the SIDI Hub has defined its workstreams and roadmap for 2024:

Identifying champion use cases for cross-border interoperability that serve as a baseline for all workstreams

Defining minimum interoperability requirements for priority use cases

Mapping trust frameworks across jurisdictions

Defining metrics of success 

Read our official announcement to learn more: https://sidi-hub.community/2024/01/30/digital-identity-community-unites-to-drive-cross-border-interoperability/ 

We are hosting a series of in-person meetings this year to progress the roadmap and invite all organizations involved in the development of digital identity solutions to get involved.

Visit the SIDI Hub website: https://lnkd.in/erWmZTCj
Join the official LinkedIn Group: https://lnkd.in/ecqtM5ry
Sign up to the newsletter: https://bit.ly/47Ul29j

 


OpenID Foundation

The OpenID Foundation (OIDF) is a global open standards body committed to helping people assert their identity wherever they choose. Founded in 2007, we are a community of technical experts leading the creation of open identity standards that are secure, interoperable, and privacy preserving. The Foundation’s OpenID Connect standard is now used by billions of people across millions of applications. In the last five years, the Financial Grade API has become the standard of choice for Open Banking and Open Data implementations, allowing people to access and share data across entities. Today, the OpenID Foundation’s standards are the connective tissue to enable people to assert their identity and access their data at scale, the scale of the internet, enabling “networks of networks” to interoperate globally. Individuals, companies, governments and non-profits are encouraged to join or participate.

Find out more at openid.net.



Human Colossus Foundation

Unraveling the Path to Genuine Semantic Interoperability across Digital Systems - Part 2

Delving into the integration of stemmatic traceability with directed acyclic graphs (DAGs) and kinematical mechanics reveals advanced strategies for enhancing data integrity and fortifying the foundations of semantic interoperability in the digital era.
Part 2: Stemmatic Traceability Further Explorations

Building on the foundational exploration of semantic interoperability in Part 1, we delve deeper into the innovative fusion of traditional methodologies with modern computational models. Previously, we discussed the pivotal roles of decentralized semantics and Overlays Capture Architecture (OCA) in enabling semantic interoperability between data models and data representation formats across varied environments. We also explored how morphological and epistemological semantics enhance our understanding of data, setting the stage for preserving meaning and context across digital platforms.

In this installment, we introduce "stemmatic traceability," marrying the ancient discipline of Stemmatics, focusing on tracing textual variations and origins, to contemporary event provenance models like Directed Acyclic Graphs (DAGs). This synergy enhances data integrity mechanisms and seamlessly integrates classical textual analysis with cutting-edge event provenance practices. Through this exploration, we aim to demonstrate how the principles of Stemmatics, previously confined to textual criticism, enhance semantic interoperability by offering innovative ways to track the evolution of digital objects.

The Convergence of Stemmatics and Directed Acyclic Graphs (DAGs) for Enhanced Data Integrity

In the dynamic landscape of digital content and data object evolution, 'Stemma' [1] emerges as a versatile umbrella term, encompassing diverse tree structures representing the evolution of digital objects. Traditionally linked to textual criticism, 'Stemma' transcends its origins, mirroring the characteristics of Directed Acyclic Graphs (DAGs), a robust model for version control systems, and more. It is a unifying genealogical tree encompassing trackable events in depicting the evolution of digital content.

The convergence of advanced data structures and traditional textual analysis methodologies in data management is both profound and strategic. One of the exemplary intersections is the alignment between DAGs and the time-honored principles of Stemmatics. This synergy unveils a dynamic landscape where event provenance models enable tracing the evolution of systemic objects, enhancing data integrity and reliability across diverse computational ecosystems.

"Stemmatics" is a discipline within textual criticism that involves studying and analyzing the relationships among various copies of a text to reconstruct the original or an earlier form of that text. It seeks to trace and depict the transmission history and ancestral relationships of different versions of a manuscript or text.

Figure 1. A Stemma example for 'De nuptiis Philologiae et Mercurii' by Martianus Capella, as proposed by Danuta Shanzer [2].

Comparing the ordered, hierarchical structuring of texts in Stemmatics with the nodal representation of data within DAGs yields a more precise understanding of data lineage and textual variation, and demonstrates how these causal models encapsulate complex, multifaceted data.

Mirroring 'Stemmatics,' which focuses on tracing texts back to their original form or archetype by unraveling their complex, layered evolutions, DAGs enable a similar journey for data objects. Every node, representing a distinct event or data state, is a stepping stone that leads back to the root (i.e., the source node) – the initial event. All other nodes (events) are causally or sequentially linked, directly or indirectly, to the source node without cycles. Each edge in the DAG signifies a direct influence or connection between events, tracing back to the initial event as the origin. Traceable tree-like structures bring transparency to the fields of data evolution and data integrity. In a digital world where data is as fluid as it is expansive, such a structured approach is instrumental in mitigating data corruption, loss, or misinterpretation.
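As a toy illustration of this structure (a minimal sketch, not an OCA or production data model), each event node can simply record its causal parents, and any node can be walked back to the source event:

interface EventNode {
  id: string;
  parents: EventNode[]; // empty for the root/source event
  payload: unknown;     // the data state or change recorded by this event
}

// Walk parent edges until the source node is reached; acyclicity guarantees termination.
function traceToRoot(node: EventNode): EventNode[] {
  const lineage: EventNode[] = [node];
  let current = node;
  while (current.parents.length > 0) {
    current = current.parents[0]; // follow one causal path for simplicity
    lineage.push(current);
  }
  return lineage;
}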

Integrating Kinematical Mechanics and Data Stemmatics

Marking the convergence of classical and modern computational paradigms, structures like DAGs, imbued with principles like Stemmatics, serve as pillars of data integrity, ensuring that as data traverses the complex pathways of digital systems, its essence, authenticity, and assurance remain intact and enriched, offering a foundation for causal representation.

Introducing "kinematical mechanics" [3] into our discussion, we delve into how this discipline optimizes event pathways and interactions within digital environments. Kinematical mechanics contributes to authentic data provenance and system performance by employing motion and event sequencing concepts, enhancing workflow optimization and data processing. Integrating morphological semantics and kinematical mechanics lays the groundwork for data stemmatics, offering a comprehensive framework for representing and understanding the sequence and patterns in data evolution.

Kinematical Mechanics: In data-centric design, kinematical mechanics analyzes and optimizes event pathways and interactions within digital environments. This discipline employs the study of motion sequences and patterns to enhance the understanding and organization of event sequences, crucially contributing to authentic data provenance and improved system performance. It is important in workflow optimization, enabling computational task scheduling and data processing pipelines. Understanding the sequences of events and their causality is fundamental to achieving system efficiency and optimal performance.

Example: In data analysis, 'Kinematical Mechanics' investigates the sequence and patterns of specific events, such as data updates or user interactions, and their impact on the system's behavior within a defined framework.

Morphological semantics and kinematical mechanics form the basis for data stemmatics, offering traceable genealogical structures to represent causal relationships between tangible 'objects' and recorded 'events,' providing a comprehensive understanding of data evolution.

Figure 2. Visualizing the Intersection of Objects and Events in Data Stemmatics.

Data Stemmatics: Data stemmatics explores the causal relationships behind data evolution, utilizing traceable graph structures with root archetypes to depict genealogical hypotheses about data relationships driven by content and historical context. It identifies the causes of data changes and offers insights into data evolution across domains. Data stemmatics is concerned with objects and events, delving into the cause of data modifications.

Data stemmatics clarifies the lineage of data changes and provides deeper insights into data evolution across various domains, thus enhancing our ability to achieve genuine semantic interoperability.

OCA and DAGs: A Synergetic Combination for Stemmatic Traceability

Stemmatic traceability, rooted in textual criticism and historical data analysis, is crucial in tracing data origins, transformations, and evolutionary paths. This method goes beyond mere nodal relationships to offer a nuanced understanding of data's evolutionary journey, thereby significantly improving our capacity for semantic interoperability.

Enhancements offered by the integration of OCA and DAGs include:

Precision in Data Lineage: The structural organization provided by OCA, coupled with the causal pathways rendered by DAGs, ensures the integrity of data structures and facilitates a transparent, unambiguous tracing of their historical evolution and transformations.

Enhanced Data Interpretability: Leveraging DAGs within the OCA framework transforms each data object's trajectory into a narrative that is both coherent and intuitively understandable. This clarity proves invaluable in scenarios where deciphering the evolution and provenance of data is critical.

Robustness Against Data Corruption: DAGs' acyclic nature inherently safeguards against data corruption and cyclic errors. Combined with OCA's structured framework, this resilience constructs a formidable defense mechanism for maintaining data integrity.

Scalability and Flexibility: Engineered with scalability at their core, OCA and DAGs adeptly navigate the complexities of expanding data landscapes. This synergistic blend ensures data integrity and traceability maintenance without compromising performance or adaptability.

Example: Consider a healthcare data ecosystem where patient records evolve. OCA organizes and structures this data while DAGs meticulously track every alteration, from initial diagnosis to treatment outcomes, ensuring a transparent, error-free historical record.
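To illustrate how such a combination might look in code, below is a deliberately simplified TypeScript sketch. The "capture base" and hash-linked record versions are loose analogies to OCA's separation of structure from data and to a DAG of record states; all type names are hypothetical, and this is not the OCA reference implementation.

```typescript
// Illustrative only: a toy "capture base" (inspired by OCA's separation of
// structure from data) plus a hash-linked chain of record versions.
import { createHash } from "node:crypto";

interface CaptureBase {
  attributes: Record<string, "Text" | "DateTime" | "Numeric">;
}

interface RecordVersion {
  data: Record<string, string | number>;
  previous: string | null; // hash of the prior version; null for the first
  hash: string;
}

function digest(data: unknown, previous: string | null): string {
  return createHash("sha256")
    .update(JSON.stringify({ data, previous }))
    .digest("hex");
}

function appendVersion(
  base: CaptureBase,
  history: RecordVersion[],
  data: Record<string, string | number>
): RecordVersion[] {
  // Validate against the capture base before the version joins the lineage.
  for (const key of Object.keys(data)) {
    if (!(key in base.attributes)) throw new Error(`unexpected attribute: ${key}`);
  }
  const previous = history.length ? history[history.length - 1].hash : null;
  const version: RecordVersion = { data, previous, hash: digest(data, previous) };
  return [...history, version];
}

const base: CaptureBase = {
  attributes: { diagnosis: "Text", recordedAt: "DateTime" },
};
let history: RecordVersion[] = [];
history = appendVersion(base, history, { diagnosis: "initial assessment", recordedAt: "2024-01-05" });
history = appendVersion(base, history, { diagnosis: "treatment started", recordedAt: "2024-02-10" });
console.log(history.map(v => ({ hash: v.hash.slice(0, 8), previous: v.previous?.slice(0, 8) ?? null })));
```

Each version carries the hash of its predecessor, so any alteration to an earlier state breaks the chain and is immediately detectable, which is the property the healthcare example relies on.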

Conclusion

Data Integrity and Traceability: Although DAG technology is used to integrate Stemmatics principles, DAGs are a natural outcome of OCA's design rather than a standalone feature. They are one of several tools OCA uses to ensure data integrity, contributing to better data transparency and traceability and highlighting OCA's adaptability and versatility in handling data.

The integration of OCA with DAGs represents a synergistic relationship in data science, advancing stemmatic traceability by combining OCA's structural framework with the traceable precision of DAGs. This marriage ensures the traceable causality of data lineage and reinforces data integrity, making it a cornerstone of modern data management and the evolution of data records.

As we navigate the complexities of modern distributed data ecosystems, OCA and DAG technologies underscore a promising horizon for semantic interoperability and data integrity. The exploration into stemmatic traceability and its integration with contemporary technological frameworks marks a significant milestone in the modern digital landscape and the ongoing journey towards a future where data is not only abundant and accessible but also enriched with a clear, traceable lineage.

That concludes Part 2 of this two-part series on genuine semantic interoperability across digital systems, where we have explored the advanced concepts of stemmatic traceability and its integration with contemporary computational models. Our exploration delved into how the ancient discipline of Stemmatics, complemented by Directed Acyclic Graphs (DAGs) and kinematical mechanics, significantly enhances our understanding of data evolution and data integrity, thereby reflecting our ongoing commitment to deepening the dialogue around semantic interoperability.

For those who found these insights enlightening and wish to explore the foundational aspects of this topic, we highly recommend revisiting Part 1 of the series on "Semantic Interoperability," where we examined the crucial roles of decentralized semantics and Overlays Capture Architecture (OCA) in facilitating data harmonization across diverse platforms. We delved into the intricate dynamics of morphological and epistemological semantics and their critical contributions to the semantic interoperability framework. Part 1 sets the stage for understanding how to achieve seamless data exchange, challenging traditional notions, and offering innovative solutions for ensuring data models carry consistent and meaningful interpretations across varied systems and platforms.

Revisiting Part 1 will provide a comprehensive backdrop to the advanced discussions presented here, offering a holistic view of achieving genuine semantic interoperability in our increasingly interconnected digital world.

Link to Genuine Semantic Interoperability across Digital Systems - Part 1: Semantic Interoperability

Stay tuned for more insightful discussions as we continue to unravel the complexities and innovations in data science and interoperability.

References

[1] Parvum Lexicon Stemmatologicum. Stemma (Stemmatology). Department of Greek and Latin Philology, University of Zurich (UZH). Retrieved from https://www.sglp.uzh.ch/static/MLS/stemmatology/Stemma_229149940.html

[2] Shanzer, D. (1986). Review Article: Felix Capella: Minus sensus Quam Nominis Pecudalis [Review of Martianus Capella: “De Nuptiis Philologiae et Mercurii,” by J. Willis]. Classical Philology, 81(1), 62–81. http://www.jstor.org/stable/269880 

[3] Zhijiang Du, Wenlong Yang, Wei Dong, Kinematics modeling and performance optimization of a kinematic-mechanics coupled continuum manipulator, Mechatronics, Volume 31, 2015, Pages 196-204, ISSN 0957-4158, https://doi.org/10.1016/j.mechatronics.2015.09.001


DIF Blog

DIF Hackathon Meet The Winners: Harsh Tyagi

Please tell us about yourself and how you got involved in the Hackathon I’m currently in my final year at university. I’ve been building apps for the past two years and have recently been learning about cryptography, but I hadn’t previously built anything with

Please tell us about yourself and how you got involved in the Hackathon

I’m currently in my final year at university. I’ve been building apps for the past two years and have recently been learning about cryptography, but I hadn’t previously built anything with Decentralized Identifiers (DIDs).

I heard about DIF through an email from Devpost with information about the Hackathon. I started reading recent DIF announcements and looking into the specifications. 

A lot of the concepts were new to me. It made me rethink how the internet can be delivered! 

What motivated you to build something in the area of personal health data? 

Another team member, who has health problems in her family, came up with the idea for HealthX Protocol. Her family has to take a big bunch of files with them to every medical appointment. There’s no way for healthcare providers to filter or compute on the data. Everything is manual and takes a lot of time and resources to manage. 

Privacy is obviously really important when it comes to personal health data, but it’s not enough. Given the current state of cyber attacks, you have no idea who has your data, whether it’s in the cloud or whatever. So, the ability to own your health data was a critical requirement for us. 

The other part is, you don’t only need to store the data, you also need to be able to share it with those who need it.

What did you learn during the hackathon, and how did you use it to meet these needs? 

The hackathon introduced me to DWNs and Web5. Before, I was into other stacks and protocols: Ethereum, zero knowledge and DeFi (Decentralized Finance). I wasn’t actively building identity into my applications. 

These past two months, I’ve been fully immersed in decentralized identity. 

With Decentralized Web Nodes (DWNs), owning your data is simple. You spin up a basic server, which can be in the cloud, on your smartphone, even in your browser. You can store the data there and send it to another DID, which can see or edit the full data, or a certain portion of it.

You encrypt your data with your DID. It’s not like Google has the keys. Even if it’s in the cloud, it’s yours. 

Somehow everything fell into place. I thought “this can be a great use case”. 
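For readers who want a rough picture of what this looks like in code, the sketch below follows TBD's Web5 JS SDK (@web5/api) as commonly documented: connect, write a record to your own DWN, then send it to another DID. Treat the exact package and method names as assumptions and check the current SDK documentation before relying on them.

```typescript
// A minimal sketch of the flow described above, based on TBD's Web5 JS SDK.
// Package and method names are assumptions drawn from the public docs.
import { Web5 } from "@web5/api";

async function storeAndShareHealthRecord(recipientDid: string) {
  // connect() creates (or loads) a DID and a DWN the user controls.
  const { web5, did: myDid } = await Web5.connect();

  // Write a record to the user's own Decentralized Web Node.
  const { record } = await web5.dwn.records.create({
    data: { type: "bloodTest", result: "normal", takenAt: "2024-03-01" },
    message: { dataFormat: "application/json" },
  });

  // Push the record to another DID's DWN (e.g., a healthcare provider).
  if (record) {
    await record.send(recipientDid);
  }
  return myDid;
}
```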

How have decentralized identity and Web5 changed how you think about app development? 

The most important thing is being able to store, own and share data. You have self-custody with web3, but it’s just the private key. This enables decentralized money, but what if I need to build an application that generates a large amount of data? I can’t put the file on Ethereum. Even with systems like Filecoin and IPFS (InterPlanetary File System), ownership is still a big issue. 

Something that connects your identifier to your storage and enables you to give access to other identifiers opens the door to a lot of new applications. Ownership of data is lacking in web3, and web5 fills the gap. 

Another consideration is that in web3, identity is anonymous. To build identity into an app, you need to write a smart contract. Proving your identity becomes really easy when you’re using DIDs and Verifiable Credentials (VCs). You can choose what information to share using Selective Disclosure. It’s all there out of the box.

VCs can also be used for access control to a DWN. For example, if you have a conference, anyone with a pass can get access to the materials. The old way of access control is using a username and password. The new way is you can give access to anyone with a VC.
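A toy TypeScript sketch of that "access for anyone holding the right VC" pattern follows. The credential shape loosely follows the W3C VC data model; the issuer DID and the check itself are purely illustrative, and a real verifier would of course validate the credential's cryptographic proof first.

```typescript
// A toy illustration of VC-based access control. The proof/signature check is
// deliberately omitted; only the credential type and issuer are inspected.

interface SimpleCredential {
  type: string[];
  issuer: string;
  credentialSubject: { id: string; [claim: string]: unknown };
}

const trustedIssuers = new Set(["did:example:conference-organizer"]); // hypothetical issuer

function mayAccessMaterials(vc: SimpleCredential): boolean {
  // In a real deployment the credential's proof would be verified first.
  return vc.type.includes("ConferencePass") && trustedIssuers.has(vc.issuer);
}

const pass: SimpleCredential = {
  type: ["VerifiableCredential", "ConferencePass"],
  issuer: "did:example:conference-organizer",
  credentialSubject: { id: "did:example:attendee-42" },
};
console.log(mayAccessMaterials(pass)); // true
```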

What next for HealthX Protocol? 

The application is currently just a prototype. We want to build something much more polished. 

At this point, generating the DIDs is a little difficult. To address that, I’m planning to integrate a digital identity wallet, perhaps even build one. Then we can do a production application. 

I’d also like to build other applications and get more involved in the decentralized identity community. 

I have a lot of ideas and I’m just getting started in this space!

Tuesday, 30. January 2024

DIF Blog

DIF Hackathon Meet The Winners: Ken Watanabe

Please introduce yourself and tell us the background to your project, "Anonymous Door Unlocking with Anonymity Revocation" My name is Ken Watanabe. I’m studying cryptography at Waseda University under Kazue Sako. My current research focus is on use cases for Verifiable Credentials (VCs), as well as


Please introduce yourself and tell us the background to your project, "Anonymous Door Unlocking with Anonymity Revocation"

My name is Ken Watanabe. I’m studying cryptography at Waseda University under Kazue Sako. My current research focus is on use cases for Verifiable Credentials (VCs), as well as signature schemes such as BBS+. 

The first time I used VCs was during a national project for the Japanese government, where we used VCs to authenticate UAVs (drones) delivering packages from A to B. 

For the hackathon, we were looking for a simple use case we could implement in our day-to-day environment. We work in a lab, so I decided to make a physical door unlocking system using 3D printers, that enables us to unlock the door to the lab using VCs. 

Please can you describe your solution? 

In our solution the university is the issuer, students are the holders and the door unlocking application is the verifier. 

We also introduced a new role in the ecosystem, ‘Opener’. Only the Opener can revoke a holder’s anonymity. The holder and verifier agree who will be the Opener during the setup process. 

The key technical components are Decentralized Identifiers (DIDs), W3C JSON-LD Verifiable Credentials and Decentralized Web Nodes (DWNs). 

We chose DWNs as they offer both storage and messaging. We realized we could put our VC into a DWN and deliver it to other entities like doors using the TBD messaging libraries. This met our requirements and made it simpler to develop. 

We made a wallet application that connects to the DWN, shows the list of VCs and presents the needed information to verifiers through a QR code. 

We used Dock network (which supports BBS+ signatures) and Arkworks to implement the crypto libraries, which I developed myself. 

Why was anonymity revocation an important feature?

We think Selective Disclosure is very important for many use cases, including this one. The holder shouldn’t have to share unnecessary attributes and the verifier shouldn’t need to hold sensitive data. 

But sometimes, data breaches, a theft or a physical accident might occur and the incident needs to be investigated. 

The Verifiable Presentations (VPs) generated by the application are stored on the lab’s Slack channel. This enables the lab manager to see when the door is opened in real time, without seeing who opened it. If something bad happens, you can pick the presentation from Slack and send it to the Opener, who can reveal the holder's identity. 

Please can you explain how the system preserves users’ privacy? 

The holder can choose which attributes to share and generate a Verifiable Presentation (VP) with just this information. They don’t need to share their name, only their faculty membership. 

We used BBS+ signatures because of the unlinkability feature, which means you can’t link multiple transactions to a single user, for enhanced privacy. BBS uses Zero Knowledge Proofs (ZKPs) to hide attributes. In this project we added another ZKP to the BBS signature, that we call “verifiable encryptions”. In our system, the holder encrypts his identifier using the Opener’s public key. The extra ZKP means the verifier can verify this has happened. 

We think it’s our main contribution to these kinds of door unlocking systems. 
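To make the roles concrete, here is a self-contained TypeScript sketch of the flow Ken describes. The BBS+ proof and the verifiable encryption to the Opener are replaced by clearly labeled stubs; it shows only how holder, verifier, and Opener fit together, not the actual cryptography used in the project.

```typescript
// Conceptual sketch only: the cryptographic steps are stand-ins.

interface Presentation {
  disclosed: Record<string, string>; // e.g., faculty membership only
  encryptedHolderId: string;         // holder identifier, readable only by the Opener
  proof: string;                     // stands in for the BBS+ proof plus the ZKP of correct encryption
}

// Stub: a real system produces a BBS+ proof over the selected attributes.
function proveSelectedAttributes(attributes: Record<string, string>, reveal: string[]): string {
  return `stub-proof(${reveal.join(",")})`;
}

// Stub: a real system verifiably encrypts under the Opener's public key.
function encryptForOpener(holderId: string, openerKey: string): string {
  return Buffer.from(`${openerKey}|${holderId}`).toString("base64");
}

function buildPresentation(
  attributes: Record<string, string>,
  holderId: string,
  openerKey: string
): Presentation {
  return {
    disclosed: { faculty: attributes.faculty },
    encryptedHolderId: encryptForOpener(holderId, openerKey),
    proof: proveSelectedAttributes(attributes, ["faculty"]),
  };
}

// Only the Opener can recover the identity, and only when an incident is investigated.
function openerRevealIdentity(encryptedHolderId: string): string {
  return Buffer.from(encryptedHolderId, "base64").toString("utf8").split("|")[1];
}

const presentation = buildPresentation(
  { faculty: "Sako Lab", name: "kept private" },
  "did:example:student-7",
  "opener-public-key"
);
console.log(presentation.disclosed);                               // { faculty: "Sako Lab" }
console.log(openerRevealIdentity(presentation.encryptedHolderId)); // "did:example:student-7"
```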

What’s next for the application? 

Only four faculty members use the system currently. I want others to be able to use it. To do this we need to issue VCs to more students. I also want to introduce the system to other doors within the university. But I know it will take time to get it into production, as we need to verify it works well. 

Only one credential can be used in the application currently, but in future we want to make VPs from multiple VCs. I also want to introduce the ability for the holder and verifier to negotiate the Opener dynamically.

What other use cases do you envisage? 

This is a research project but we think the system can apply to other scenarios. One is ride hailing apps such as Uber and Lyft. The customer can order a taxi anonymously but if an accident happens, their anonymity can be revoked. Another is anonymous social networks. Users can chat anonymously, but if there’s a message that’s violent or abusive the author’s anonymity could be revoked. 

Apart from VCs, my other interest is cryptography, so I also want to explore completely different scenarios. I want to keep using this technology.

Last week I presented the application at SCIS2024 in Nagasaki, the biggest cryptography conference in Japan. Many people from government, academia and industry were there. I hope it generated a lot of interest. 

How did you find the experience of participating in the hackathon? Do you envisage participating in DIF going forwards? 

I really enjoyed the hackathon and am very honored to receive this prize. It was my first time using Decentralized Web Nodes. I found the DWNs GitHub readme page and API documentation very easy to read. I just followed the intro and found I could implement it easily. It was straightforward to store the VCs using DWNs. 

For me, BBS Signatures is an interesting area to explore further. I’d also like to use DWNs in other projects and, if possible, I would like to add some features to it. 


Velocity Network

Gianna Pfifner joins Velocity’s board

Congratulations to Gianna Pfiffner, who has been voted on to the Velocity Network Foundation Board. The post Gianna Pfifner joins Velocity’s board appeared first on Velocity.

Monday, 29. January 2024

FIDO Alliance

SRF: Change your Password Day: New standard passkey: Are passwords soon a thing of the past?

Passwords are as old as computers – but are still considered insecure and cumbersome. But now there is hope: “Passkey” is the name of a new procedure in which you […]

Passwords are as old as computers – but are still considered insecure and cumbersome. But now there is hope: “Passkey” is the name of a new procedure in which you don’t have to remember passwords or type in codes – and it’s still more secure. Behind it is FIDO, an alliance of large IT companies. SRF digital editor Peter Buchmann explains what it’s about.


IdentityWeek: Mastercard: 80% of data breaches linked to passwords

Mastercard is modernising digital interactions underpinned by biometric and AI-powered tools which provide less friction than endless password authentication. The company has joined the FIDO Alliance that calls for  encrypted […]

Mastercard is modernising digital interactions underpinned by biometric and AI-powered tools which provide less friction than endless password authentication. The company has joined the FIDO Alliance, which calls for encrypted passkey solutions and specifications to end passwords. The Mastercard Biometric Authentication Service provides FIDO certified passwordless authentication that allows users to verify their identity using biometrics.


PCMag: X Now Supports Passkey Login on iOS

X (previously known as Twitter) will now let its users login with a passkey instead of a password – but only on iOS devices. X announced its intentions to adopt the passwordless […]

X (previously known as Twitter) will now let its users login with a passkey instead of a password – but only on iOS devices. X announced its intentions to adopt the passwordless technology a while back, and now it has launched the feature for iPhone users. It allows for a quicker way to login, only requiring users to authenticate with whatever they use to lock their device, such as their fingerprint, FaceID, or PIN. 


TechCrunch: X adds support for passkeys on iOS after removing SMS 2FA support last year

X, formerly known as Twitter, has introduced support for passkeys, a secure login method for U.S. users on iOS devices. This implementation follows the removal of SMS 2FA support for […]

X, formerly known as Twitter, has introduced support for passkeys, a secure login method for U.S. users on iOS devices. This implementation follows last year's removal of SMS 2FA support for non-paying users, a move that was criticized for reducing overall security.

Sunday, 28. January 2024

Project VRM

An Approach to Paying for Everything That’s Free

Now that we’ve hit peak subscription, and paywalls are showing up in front of formerly free digital goods (requiring, of course, more subscriptions), perhaps the world is ready for EmanciPay, an idea that has been biding its time on our wiki since 2009. So, rather than leave it buried there, we’ll surface it here. Dig::: […]

Prompt: “A public marketplace for digital goods where people pay whatever they please for everything they consume.” Via Microsoft Image Creator

Now that we’ve hit peak subscription, and paywalls are showing up in front of formerly free digital goods (requiring, of course, more subscriptions), perhaps the world is ready for EmanciPay, an idea that has been biding its time on our wiki since 2009.

So, rather than leave it buried there, we’ll surface it here. Dig:::

Overview

Simply put, EmanciPay makes it easy for anybody to pay (or offer to pay) —

as much as they like
however they like
for whatever they like
on their own terms

— or at least to start with that full set of options, and to work out differences with sellers easily and with minimal friction.

EmanciPay turns consumers (aka users) into customers by giving them a pricing gun (something which in the past only sellers used) and their own means to make offers, to pay outright, and to escrow the intention to pay when price and other requirements are met. And to be able to do this at scale across all sellers, much as cash, browsers, credit cards, and email clients do the same. Payments themselves can also be escrowed.

In slightly more technical terms, EmanciPay is a payment framework for customers operating with full agency in the open marketplace, and at scale. It operates on open protocols and standards, so it can be used by any buyer, seller or intermediary.

It was conceived as a way to pay for music, journalism, or what any artist brings into the world. But it can apply to anything. For example, subscriptions have become a giant fecosystem in which every seller has separate and non-substitutable scale across all subscribers, while subscribers have zero scale across all sellers, with the highly conditional exceptions of silo’d commercial intermediaries. As Customer Commons puts it,

There’s also not much help coming from the subscription management services we have on our side: Truebill, Bobby, Money Dashboard, Mint, Subscript Me, BillTracker Pro, Trim, Subby, Card Due, Sift, SubMan, and Subscript Me. Nor from the subscription management systems offered by Paypal, Amazon, Apple or Google (e.g. with Google Sheets and Google Doc templates). All of them are too narrow, too closed and exclusive, too exposed to the surveillance imperatives of corporate giants, and too vested in the status quo.

That status quo sucks (see here, or just look up “subscription hell”), and it’s way past time to unscrew it. But how?

The better question is where?

The answer to that is on our side: the customer’s side.

While EmanciPay was first conceived by ProjectVRM as a way to make live payments to nonprofits and to provide a new monetization method for publishers, it also works as a counterpart to sellers’ subscription systems in what Zuora (a supplier of subscription management systems to the publishing industry, including The Guardian and Financial Times) calls the “subscription economy“, which it says “is built on ever-changing relationships with your customers”. Since relationships are two-way by nature, EmanciPay is one way that customers can manage their end, while publisher-side systems such as Zuora’s manage the other.

EmanciPay economic case

EmanciPay provides a new form of economic signaling not available to individuals, either on the Net or before the Net became available as a communications medium. EmanciPay will use open standards and consist of open-source code. While any commercial fourth parties can use EmanciPay (or its principles, or any parts of it they like), EmanciPay’s open and standard framework will support fourth parties by making them substitutable, much as the open standards of email (SMTP, POP3, IMAP) make email systems substitutable. (Each has what Joe Andrieu calls service endpoint portability.)

EmanciPay is an instrument of customer independence from all of the billion (or so) commercial entities on the Net, each with its own arcane and siloed systems for engaging and managing customer relations, as well as receipt, acknowledgment, and accounting for payments from customers.

Use Case Background

EmanciPay was conceived originally as a way to provide customers with the means to signal interest and the ability to pay for media and creative works (most of which are freely available on the Web, if not always free of charge). Through EmanciPay, demand and supply can relate, converse, and transact business on mutually beneficial terms, rather than only on terms provided by the countless different siloed systems we have today, each serving to hold the customer captive, and causing much inconvenience and friction in the process.

Media goods were chosen for five reasons: 1) most are available for free, even if they cost money or sit behind paywalls; 2) paywalls, which are cookie-based, cannot relate to individuals as anything other than submissive and dependent parties (and each browser a user employs carries a different set of cookies); 3) both media companies and non-profits are constantly looking for new sources of revenue; 4) the subscription model, while it creates steady income and other conveniences for sellers, is often a bad deal for customers, and is now so overused (see Subscriptification) that the world is approaching a peak subscription crisis, and unscrewing it can only happen from the customer’s side (because the business is incapable of unscrewing the problem itself); 5) all methods of intermediating payment choices are siloed either by the seller or by intermediators, discouraging participation by individuals.

What the marketplace requires are new business and social contracts that ease payment and stigmatize non-payment for creative goods. The friction involved in voluntary payment is still high, even on the Web, where one must go through complex ceremonies even to make simple payments. There is no common and easy way to keep track of what media (free or otherwise) we use (see Media Logging), to determine what it might be worth, or to pay for it easily and in standard ways, to many different suppliers. (Again, each supplier has its own system for accepting payments.)

EmanciPay differs from other payment models (subscriptions, newsstands, tip jars) by providing customers with the ability to choose what they wish to pay and how they’ll pay it, with minimum friction — and with full choice about what they disclose about themselves.

EmanciPay will also support credit for referrals, requests for service, feedback, and other relationship support mechanisms, all at the control of the user. For example, EmanciPay can provide quick and easy ways for listeners to pay for public radio broadcasts or podcasts, for readers to pay for otherwise “free” papers or blogs, for listeners to pay to hear music and support artists, for users to issue promises of payment for stories or programs — all without requiring the individual to disclose unnecessary private information or to become a “member” — although these options are kept open.

This will scaffold genuine relationships between buyers and sellers in the media marketplace. It will also give deeper meaning to “membership” in non-profits. (Under the current system, “membership” generally means putting one’s name on a pitch list for future contributions, and not much more than that.)

EmanciPay will also connect the sellers’ CRM (Customer Relationship Management) systems with customers’ VRM (Vendor Relationship Management) systems, supporting rich and participatory two-way relationships. In fact, EmanciPay will by definition be a VRM system.

Micro-accounting and Macro-distribution

The idea of “micro-payments” for goods on the Net has been around for a long time and is often brought up as a potential business model for journalism, for example in this article by Walter Isaacson in Time Magazine. It hasn’t happened, at least not globally, because it’s too complicated, and in prototype it only works inside private silos.

What ProjectVRM suggests instead is something we don’t yet have, but very much need:

micro-accounting for actual uses: think of this simply as “keeping track of” the news, podcasts, newsletters, or music we consume;
macro-distribution of payments for accumulated use (that’s no longer “micro”).

Much — maybe most — of the digital goods we consume are both free for the taking and worth more than $zero. How much more? We need to be able to say. In economic terms, demand needs to have a much wider range of signals it can give to supply. And give to each other, to better gauge what we should be willing to pay for free stuff that has real value but not a hard price.

As currently planned, EmanciPay would –

Provide a single and easy way for consumers of “content” to become customers of it. In the current system — which isn’t one — every artist, every musical group, and every public radio and TV station has his, her, or its own way of taking in contributions from those who appreciate the work. This can be arduous and time-consuming for everybody involved. (Imagine trying to pay separately every musical artist you like, for all your enjoyment of each artist’s work.) What EmanciPay proposes, however, is not a replacement for existing systems, but a new system that can supplement existing fund-raising systems — one that can soak up much of today’s MLOTT: Money Left On The Table.

Provide ways for individuals to look back through their media usage histories, inform themselves about what they have been enjoying, and determine how much it is worth to them. The Copyright Arbitration Royalty Panel (CARP), and later the Copyright Royalty Board (CRB), both came up with “rates and terms that would have been negotiated in the marketplace between a willing buyer and a willing seller.” This almost absurd language first appeared in the 1995 Digital Performance Royalty Act (DPRA) and was tweaked in 1998 by the Digital Millennium Copyright Act (DMCA), under which both the CARP and the CRB operated. The rates they came up with peaked at $.0001 per “performance” (a song or recording), per listener. EmanciPay creates the “willing buyer” that the DPRA thought wouldn’t exist.

Stigmatize non-payment for worthwhile media goods. This is where “social” will finally come to be something more than yet another tech buzzmodifier.

All these require micro-accounting, not micro-payments. Micro-accounting can inform ordinary payments that can be made in clever new ways that should satisfy everybody with an interest in seeing artists compensated fairly for their work. An individual listener, for example, can say “I want to pay 1¢ for every song I hear,” and “I’ll send SoundExchange a lump sum of all the pennies I wish to pay for songs I have heard over a year, along with an accounting of what artists and songs I’ve listened to” — and leave dispersal of those totaled pennies up to the kind of agency that likes, and can be trusted, to do that kind of thing. That’s the macro-distribution part of the system.

Similar systems can also be put in place for readers of newspapers, blogs, and other journals. What’s important is that the control is in the hands of the individual and that the accounting and dispersal systems work the same way for everybody.
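As a back-of-the-envelope illustration of micro-accounting feeding macro-distribution, the TypeScript sketch below logs each use locally and settles one lump sum, plus a per-artist breakdown, at the end of a period. The names and the one-cent rate are illustrative only, not part of any EmanciPay specification.

```typescript
// Micro-accounting: log each use. Macro-distribution: settle one lump sum
// and a per-artist breakdown at the end of the period.

interface UsageEvent {
  artist: string;
  work: string;
  playedAt: Date;
}

const CENTS_PER_PLAY = 1; // "I want to pay 1¢ for every song I hear"

function settle(events: UsageEvent[]): { totalCents: number; perArtist: Map<string, number> } {
  const perArtist = new Map<string, number>();
  for (const e of events) {
    perArtist.set(e.artist, (perArtist.get(e.artist) ?? 0) + CENTS_PER_PLAY);
  }
  return { totalCents: events.length * CENTS_PER_PLAY, perArtist };
}

const log: UsageEvent[] = [
  { artist: "Artist A", work: "Song 1", playedAt: new Date() },
  { artist: "Artist A", work: "Song 2", playedAt: new Date() },
  { artist: "Artist B", work: "Song 3", playedAt: new Date() },
];
const { totalCents, perArtist } = settle(log);
console.log(totalCents, Object.fromEntries(perArtist)); // 3 { "Artist A": 2, "Artist B": 1 }
```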

Friday, 26. January 2024

LionsGate Digital

The Declaration Of Digital Independence

Declaration Of Digital Independence Authored by Larry Sanger, Co-founder of Wikipedia We declare that we have unalienable digital rights, rights that define how information that we individually own may or may not be treated by others, and that among these rights are free speech, privacy, and security. Since the proprietary, centralized architecture of the Internet at present has induced most
Declaration Of Digital Independence Authored by Larry Sanger, Co-founder of Wikipedia

We declare that we have unalienable digital rights, rights that define how information that we individually own may or may not be treated by others, and that among these rights are free speech, privacy, and security. Since the proprietary, centralized architecture of the Internet at present has induced most of us to abandon these rights, however reluctantly or cynically, we ought to demand a new system that respects them properly.

The difficulty and divisiveness of wholesale reform means that this task is not to be undertaken lightly. For years we have approved of and even celebrated enterprise as it has profited from our communication and labor without compensation to us. But it has become abundantly clear more recently that a callous, secretive, controlling, and exploitative animus guides the centralized networks of the Internet and the corporations behind them.

The long train of abuses we have suffered makes it our right, even our duty, to replace the old networks. To show what train of abuses we have suffered at the hands of these giant corporations, let these facts be submitted to a candid world.

They have practiced in-house moderation in keeping with their executives’ notions of what will maximize profit, rather than allowing moderation to be performed more democratically and by random members of the community.

They have banned, shadow-banned, throttled, and demonetized both users and content based on political considerations, exercising their enormous corporate power to influence elections globally.

They have adopted algorithms for user feeds that highlight the most controversial content, making civic discussion more emotional and irrational and making it possible for foreign powers to exercise an unmerited influence on elections globally.

They have required agreement to terms of service that are impossible for ordinary users to understand, and which are objectionably vague in ways that permit them to legally defend their exploitative practices.

They have marketed private data to advertisers in ways that no one would specifically assent to.

They have failed to provide clear ways to opt out of such marketing schemes.

They have subjected users to such terms and surveillance even when users pay them for products and services.

They have data-mined user content and behavior in sophisticated and disturbing ways, learning sometimes more about their users than their users know about themselves; they have profited from this hidden but personal information.

They have avoided using strong, end-to-end encryption when users have a right to expect total privacy, in order to retain access to user data.

They have amassed stunning quantities of user data while failing to follow sound information security practices, such as encryption; they have inadvertently or deliberately opened that data to both illegal attacks and government surveillance.

They have unfairly blocked accounts, posts, and means of funding on political or religious grounds, preferring the loyalty of some users over others.

They have sometimes been too ready to cooperate with despotic governments that both control information and surveil their people.

They have failed to provide adequate and desirable options that users may use to guide their own experience of their services, preferring to manipulate users for profit.

They have failed to provide users adequate tools for searching their own content, forcing users rather to employ interfaces insultingly inadequate for the purpose.

They have exploited users and volunteers who freely contribute data to their sites, by making such data available to others only via paid application program interfaces and privacy-violating terms of service, failing to make such freely-contributed data free and open source, and disallowing users to anonymize their data and opt out easily.

They have failed to provide adequate tools, and sometimes any tools, to export user data in a common data standard.

They have created artificial silos for their own profit; they have failed to provide means to incorporate similar content, served from elsewhere, as part of their interface, forcing users to stay within their networks and cutting them off from family, friends, and associates who use other networks.

They have profited from the content and activity of users, often without sharing any of these profits with the users.

They have treated users arrogantly as a fungible resource to be exploited and controlled rather than being treated respectfully, as free, independent, and diverse partners.

We have begged and pleaded, complained, and resorted to the law. The executives of the corporations must be familiar with these common complaints; but they acknowledge them publicly only rarely and grudgingly. The ill treatment continues, showing that most of such executives are not fit stewards of the public trust.

The most reliable guarantee of our privacy, security, and free speech is not in the form of any enterprise, organization, or government, but instead in the free agreement among free individuals to use common standards and protocols. The vast power wielded by social networks of the early 21st century, putting our digital rights in serious jeopardy, demonstrates that we must engineer new—but old-fashioned—decentralized networks that make such clearly dangerous concentrations of power impossible.

Therefore, we declare our support of the following principles.

Principles of Decentralized Social Networks

We free individuals should be able to publish our data freely, without having to answer to any corporation.

We declare that we legally own our own data; we possess both legal and moral rights to control our own data.

Posts that appear on social networks should be able to be served, like email and blogs, from many independent services that we individually control, rather than from databases that corporations exclusively control or from any central repository.

Just as no one has the right to eavesdrop on private conversations in homes without extraordinarily good reasons, so also the privacy rights of users must be preserved against criminal, corporate, and governmental monitoring; therefore, for private content, the protocols must support strong, end-to-end encryption and other good privacy practices.

As is the case with the Internet domain name system, lists of available user feeds should be restricted by technical standards and protocols only, never according to user identity or content.

Social media applications should make available data input by the user, at the user’s sole discretion, to be distributed by all other publishers according to common, global standards and protocols, just as are email and blogs, with no publisher being privileged by the network above another. Applications with idiosyncratic standards violate their users’ digital rights.

Accordingly, social media applications should aggregate posts from multiple, independent data sources as determined by the user, and in an order determined by the user’s preferences.

No corporation, or small group of corporations, should control the standards and protocols of decentralized networks, nor should there be a single brand, owner, proprietary software, or Internet location associated with them, as that would constitute centralization.

Users should expect to be able to participate in the new networks, and to enjoy the rights above enumerated, without special technical skills. They should have very easy-to-use control over privacy, both fine- and coarse-grained, with the most private messages encrypted automatically, and using tools for controlling feeds and search results that are easy for non-technical people to use.

We hold that to embrace these principles is to return to the sounder and better practices of the earlier Internet, which were, after all, the foundation for the brilliant rise of the Internet. Anyone who opposes these principles opposes the Internet itself. Thus we pledge to code, design, and participate in newer and better networks that follow these principles, and to eschew the older, controlling, and soon to be outmoded networks.

We, therefore, the undersigned people of the Internet, do solemnly publish and declare that we will do all we can to create decentralized social networks; that as many of us as possible should distribute, discuss, and sign their names to this document; that we endorse the preceding statement of principles of decentralization; that we will judge social media companies by these principles; that we will demonstrate our solidarity to the cause by abandoning abusive networks if necessary; and that we, both users and developers, will advance the cause of a more decentralized Internet.

Sign the Petition at Change.org

The post The Declaration Of Digital Independence appeared first on Lions Gate Digital.

Thursday, 25. January 2024

Digital ID for Canadians

The Crucial Link Between Accessibility and Digital Identity

Author: Marie Jordan from VISA. Additional contributions made by members of DIACC’s Adoption Expert Committee. In the rapidly evolving landscape of the digital age, the…

Author: Marie Jordan from VISA. Additional contributions made by members of DIACC’s Adoption Expert Committee.

In the rapidly evolving landscape of the digital age, the concept of identity has transcended the physical realm and taken root in the digital world. This shift towards digital identities brings about numerous conveniences and efficiencies, but it also presents challenges: ensuring accessibility and equity for all. From online banking to social media profiles, our digital identity is an intricate tapestry that weaves together various facets of our lives. It’s crucial to note that when discussing inclusion, equity, and accessibility in this context, the focus is primarily on individuals who experience physical or cognitive disabilities that may impair their use of technology from the outset.

The importance of accessibility in creating digital identity solutions cannot be overstated. To achieve true inclusivity for this specific group, both the public and private sectors must prioritize accessibility and consider specific principles to safeguard the rights and privacy of individuals with disabilities. In this article, we’ll delve into the significance of accessibility for digital identity and the protection of marginalized communities, outlining key principles for both public and private sectors to consider.

Part 1: The Significance of Accessibility in Developing Digital Identity

Digital identity solutions are central to our modern lives, facilitating everything from accessing healthcare records to participating in online communities. However, these advantages are only fully realized when these systems are accessible to everyone, regardless of their physical or cognitive abilities, including accounting for aging populations. An initial product release that lacks accessibility and proves difficult to use, even if it functions as intended, can erode trust and create negative perceptions.

Universal design: A foundational principle for digital identity solutions is creating systems usable by all individuals, regardless of disability. A universally designed digital identity solution should accommodate a wide range of abilities, modalities of interaction, and preferences, ensuring that everyone can participate in the digital world on equal terms.

Inclusivity in development: Involving individuals with disabilities in the design and testing phases ensures that the final product is genuinely accessible. By including diverse perspectives, developers can identify and rectify accessibility issues early in the development cycle.

Adherence to standards: To ensure accessibility, digital identity solutions must adhere to globally recognized accessibility standards, such as W3C’s Web Content Accessibility Guidelines. These provide a clear set of guidelines for making digital content and applications accessible. Compliance with these standards is crucial for ensuring that digital identities are available and usable for all.

User-centric approach: Developers must seek to understand how individuals with disabilities interact with their application or technology, offering customization options that empower users to adapt the system to their unique needs and requirements. This might include adjustable font sizes, alternative input methods, and compatibility with assistive technologies. They should also be adaptive in their design.

Privacy and security: These are paramount in digital identity solutions, as individuals with disabilities may be particularly vulnerable to privacy breaches and identity theft. Implementing robust security measures while maintaining respect for user privacy is essential. This can be achieved through encryption, robust authentication methods, and clear privacy policies. Regular audits and assessments can address the security and privacy practices of digital identity solutions as technology shifts, including vulnerability testing and compliance checks to ensure the highest standards of privacy and security are maintained.

Part 2: Safeguarding the Privacy and Trust of Individuals with Disabilities

To ensure that the privacy and trust of all citizens are safeguarded appropriately, accessible solutions must be designed and delivered with intent. To ensure that accessibility is realized, a high level of understanding and education is necessary for individuals to utilize their identity in digital channels without the apprehension of misuse or fear of being exploited.

Informed consent: Individuals with disabilities should have access to clear and understandable information about how their digital identity data will be used. Obtaining informed consent ensures that users are aware of the risks and benefits of participating in digital identity systems.

Minimal data collection: Users should understand that only the data that is absolutely necessary for the functioning of the digital identity system is being collected. Minimizing data collection reduces the risk of privacy breaches and limits the potential for misuse of personal information.

Transparency in data practices: Transparency should be maintained in data practices. All users must have access to their data and understand how it is being used and processed. Transparency, particularly to historically marginalized communities, builds trust and empowers individuals to make informed decisions about the use of their digital identities.

Accessible privacy settings and controls: Accessible privacy settings and controls that are easy for individuals with disabilities to use must be available. These controls must allow users to manage their data and privacy preferences effectively.

In conclusion, it’s important to recognize that accessibility, inclusion, and equity are multifaceted challenges. While this article focuses on individuals experiencing physical or cognitive disabilities, it’s crucial to acknowledge that there are various barriers to equitable access, including socio-economic factors, digital literacy, and language barriers. By addressing these challenges collectively, we can work towards creating a more inclusive digital world for everyone.


Velocity Network

Marc Jansen joins Velocity’s board

We're pleased to announce that Velocity Network Foundation members have voted Marc Jansen onto the Board of Directors. The post Marc Jansen joins Velocity’s board appeared first on Velocity.



DIF Blog

Guest blog: Jay Prakash, Silence Laboratories

Founded in 2022, Silence Laboratories is a cybersecurity startup enabling adoption of privacy preserving threshold signatures and secure computations through its developer-focused cryptographic stack. The company also organizes Decompute, a conference focused on decentralized security with multiparty computation.  We spoke to CEO and co-founder Jay Prakash.  Please introduce

Founded in 2022, Silence Laboratories is a cybersecurity startup enabling adoption of privacy preserving threshold signatures and secure computations through its developer-focused cryptographic stack. The company also organizes Decompute, a conference focused on decentralized security with multiparty computation. 

We spoke to CEO and co-founder Jay Prakash. 

Please introduce yourself, and explain how you developed the idea for Silence Labs 

I did my PhD in Usable Security, that is, security an average user is able to handle, with all the math and complexity hidden. My PhD Supervisor and I found multiple vulnerabilities in existing Two-Factor Authentication (2FA) solutions, which we published and described at various conferences. We thought, “Why not build a company to do this better?”

During this period I spent time in both Singapore and the US. In the process of talking to prospective customers, we realized there was a bit of a mismatch between our original idea and the market need. However, we saw clear demand for decentralized authentication. 

We began meeting with crypto wallet providers. Many were talking about exposure of private keys, which is a common problem. That’s how we landed on Multiparty Computation (MPC) as an area with a lot of commercial potential. 

What are you building? 

We have developed an interactive protocol which allows a group of parties to do mathematical calculations on private data.

For example, the data could be key shares held by isolated computing nodes trying to calculate the signature for a transaction. The requirement is to produce a valid signature from a predetermined proportion of the nodes in the network, known as t out of n secret sharing. 

It’s a hot problem that we latched onto and started to develop around. 

We expose the protocol in our SDKs and libraries, which customers can use to distribute the signing process and overcome the problem of key exposure. 
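For readers unfamiliar with t-out-of-n sharing, the self-contained TypeScript sketch below implements the classic Shamir scheme over a prime field, where any t shares reconstruct the secret. It illustrates the general concept only; Silence Laboratories' production protocols perform threshold signing without ever reconstructing the key, and this demo uses non-cryptographic randomness.

```typescript
// Shamir secret sharing: split a secret into n shares so that any t of them
// reconstruct it via Lagrange interpolation at x = 0. Demo-grade only.

const P = 2n ** 127n - 1n; // a Mersenne prime, large enough for a demo
const mod = (a: bigint) => ((a % P) + P) % P;

// Modular inverse via Fermat's little theorem (P is prime).
function inv(a: bigint): bigint {
  let result = 1n, base = mod(a), e = P - 2n;
  while (e > 0n) {
    if (e & 1n) result = mod(result * base);
    base = mod(base * base);
    e >>= 1n;
  }
  return result;
}

// Split `secret` into n shares, any t of which reconstruct it.
function split(secret: bigint, t: number, n: number): Array<[bigint, bigint]> {
  const coeffs = [mod(secret)];
  for (let i = 1; i < t; i++) {
    coeffs.push(BigInt(Math.floor(Math.random() * 1e15))); // demo-grade randomness only
  }
  const shares: Array<[bigint, bigint]> = [];
  for (let x = 1n; x <= BigInt(n); x++) {
    let y = 0n, xPow = 1n;
    for (const c of coeffs) {
      y = mod(y + c * xPow);
      xPow = mod(xPow * x);
    }
    shares.push([x, y]);
  }
  return shares;
}

// Lagrange interpolation at x = 0 recovers the secret from any t shares.
function combine(shares: Array<[bigint, bigint]>): bigint {
  let secret = 0n;
  for (const [xi, yi] of shares) {
    let num = 1n, den = 1n;
    for (const [xj] of shares) {
      if (xj === xi) continue;
      num = mod(num * mod(-xj));
      den = mod(den * mod(xi - xj));
    }
    secret = mod(secret + yi * num * inv(den));
  }
  return secret;
}

const shares = split(123456789n, 3, 5);
console.log(combine(shares.slice(0, 3))); // 123456789n
```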

How is your solution being used today? 

Our solution provides a good amount of freedom regarding what policies are set and how keyshares take place. We provide the tool, but don't dictate how it should be used. 

There are a couple of ways our partners are using it.

One is browser plugin wallets that split the private key between the user’s browser and their phone. 

Another design is to create a network which manages the keys on your behalf. You provide your ID, then the network runs a protocol (such as  5 out of 10 nodes) to get a valid signature. 

A third design is to do one keyshare from the phone and one from the wallet provider. If our customer is a custodian holding a large volume of assets, they can also split their key between multiple directors and/or employees. 

Are you targeting other market segments, in addition to crypto wallets?  

MPC is a powerful tool that can be used for many purposes.  We’ve been doing a lot of research and development around using it for privacy guarantees. For example, a number of financial institutions hold your financial data. If you now want to take out a loan, the lender needs access to your credit score. Traditionally, credit agencies scrape your data without you knowing and return a score. Your data passes through lots of hands, you have no control over what’s happening to it, it’s aggregated and vulnerable to attack. MPC can radically improve how this is done. 

Another use case is Reg Tech (regulatory technology) including Anti Money Laundering (AML) compliance. To uncover money laundering, you need to collaborate with lots of partners. For example, if I’m a telco and you’re a bank, we can both reduce our risk by computing on the customers’ combined telco and banking history. Reg Tech providers currently can’t share private data with each other, but with privacy guarantees, these protocols can comply fully with GDPR and other applicable regulations. 

We want to position this like Two-Factor Authentication, which is already well understood by consumers. The intention is that the user experience will be exactly the same. To deliver that, it has to work fast. Right now we have the fastest multiparty signing library in production, around 5 to 10x faster than other solutions. 

Can you unpack the concept of Privacy Guarantees a bit please? 

There’s a big misunderstanding around consent. Typically a service provider creates a super-long consent form and you tick to say you agree. What we are trying to champion is: One, the user interface should be clearer and Two, consent should not be one-time or one-directional. If I want to pull a piece of private data I previously provided, it should be removed from the entire ecosystem. 

To make consent programmable, you need something like Multiparty Computation. MPC allows you to build more powerful and user-centric applications by guaranteeing decentralization of the computation. 

In short, wherever multiple institutions have your data and want to collaborate without exposing your data to each other, that’s where our solution can help. 
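As a toy example of the kind of joint computation described above, the sketch below lets two parties learn the sum of their private values without either revealing its input, using simple additive secret sharing. It is purely illustrative of the MPC idea, not Silence Laboratories' stack, and it omits the secure channels and richer protocols a real deployment needs.

```typescript
// Additive secret sharing: each party splits its private input into random
// shares, exchanges one share, and sums what it holds. Combining the partial
// sums yields the joint result without exposing either input.

const M = 2n ** 64n;
const rand = () => BigInt(Math.floor(Math.random() * Number.MAX_SAFE_INTEGER)) % M;
const mod = (a: bigint) => ((a % M) + M) % M;

// Split a private input into two additive shares.
function share(secret: bigint): [bigint, bigint] {
  const r = rand();
  return [r, mod(secret - r)];
}

const bankScore = 42n;   // private to the bank
const telcoScore = 17n;  // private to the telco

const [bankKeeps, bankSends] = share(bankScore);
const [telcoKeeps, telcoSends] = share(telcoScore);

// Each party adds the shares it holds; neither ever sees the other's input.
const partial1 = mod(bankKeeps + telcoSends);
const partial2 = mod(telcoKeeps + bankSends);

console.log(mod(partial1 + partial2)); // 59n: the joint result, and nothing more
```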

What do you see as the value of participating in DIF? 

I heard about the hackathon through someone at DIF. I’ve been quite active on DIF’s Slack channel and hope to engage more formally soon. 

I see two opportunities for Silence Labs. One is collaboration with others focused on similar topics, for example through a DIF working group. The other is about driving awareness. There’s little inherent ‘pull’ for privacy from companies, as they believe it’s just about compliance. But multiple surveys show there are business benefits too. For example, one survey showed that banks offering privacy guarantees can provide twenty percent more loans with less overall risk. 

Last year we organized a conference, Decompute, where DIF was one of the partners. The event is happening again this year (in Singapore on 17 September) and we’re also interested in running an event in London. We see this as an opportunity to drive much more engagement from the decentralized identity community, as well as wider awareness beyond it. 

Wednesday, 24. January 2024

The Rubric

DIDs for Any Crypto (did:pkh, Part 2)

did:pkh is the minimalist multi-blockchain DID method, designed to work with any blockchain with minimal fuss. Today we talk with two of the authors–and implementers–of did:pkh, Wayne Chang and Joel Thorstensson.    References 3Box Labs https://3boxlabs.com/   Ceramic Network https://ceramic.network/  Chain Agnostic Improvement Proposals (CAIP) https://github.com/ChainAgnos
did:pkh is the minimalist multi-blockchain DID method, designed to work with any blockchain with minimal fuss. Today we talk with two of the authors–and implementers–of did:pkh, Wayne Chang and Joel Thorstensson.    References 3Box Labs https://3boxlabs.com/   Ceramic Network https://ceramic.network/  Chain Agnostic Improvement Proposals (CAIP) https://github.com/ChainAgnostic/CAIPs  Chain Agnostic Standards Alliance (CASA) https://github.com/ChainAgnostic/CASA   DID Directory https://diddirectory.com/  did:ens...

DIDs for Any Crypto (did:pkh, Part 1)

did:pkh is the minimalist multi-blockchain DID method, designed to work with any blockchain with minimal fuss. Today we talk with two of the authors–and implementers–of did:pkh, Wayne Chang and Joel Thorstensson.    References 3Box Labs https://3boxlabs.com/   Ceramic Network https://ceramic.network/  Chain Agnostic Improvement Proposals (CAIP) https://github.com/ChainAgnos
did:pkh is the minimalist multi-blockchain DID method, designed to work with any blockchain with minimal fuss. Today we talk with two of the authors–and implementers–of did:pkh, Wayne Chang and Joel Thorstensson.    References 3Box Labs https://3boxlabs.com/   Ceramic Network https://ceramic.network/  Chain Agnostic Improvement Proposals (CAIP) https://github.com/ChainAgnostic/CAIPs  Chain Agnostic Standards Alliance (CASA) https://github.com/ChainAgnostic/CASA   DID Directory https://diddirectory.com/  did:ens...

Velocity Network

Velocity’s Etan Bernstein features in Polygon webinar

Velocity's co-founder and Head of Ecosystem Etan Bernstein joins a panel hosted by Polygon ID on “The Future of Digital Identity: Identity Ecosystem”. The post Velocity’s Etan Bernstein features in Polygon webinar appeared first on Velocity.