Last Update 1:15 PM November 20, 2024 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Wednesday, 20. November 2024

Spherical Cow Consulting

Rethinking Identity Management: The Role of Non-Human Identities in Academic Research

Academia is facing challenges in managing non-human identities (NHIs), which are essential for modern research systems but often treated like human users. As NHIs grow in complexity, issues like token sprawl, access management misalignments, and compliance difficulties arise, especially in collaborative environments like high-performance computing. Traditional directories fail to manage these identities.

Academia has always been about pushing boundaries—whether in knowledge, technology, or collaboration. But as research grows more complex and reliant on technology, so too does the need to address a hidden layer of identity management. I’m talking about non-human identities (NHIs): those workloads, APIs, batch jobs, and software systems that work tirelessly behind the scenes. This is more than service accounts and bots. This is the underlying infrastructure for modern IT systems.

NHIs aren’t a new concept, but how we manage them today isn’t just outdated—it’s risky. Let’s dig in.

What Are NHIs?

Think about the processes that underpin research in a university. Automated data collection? That’s an NHI. Research simulations running on high-performance computing (HPC) systems? Also NHIs. APIs that manage sensitive student and research data? You guessed it—NHIs. These identities are everywhere, yet we still treat them like human users in many cases, with joiner/mover/leaver workflows and directory mappings.

And while this “fit them into the human box” approach might work on a small scale, it doesn’t secure the infrastructure they’re tied to. That’s a problem.

Why NHIs Are a Challenge

NHIs often inherit the same challenges as their human counterparts, only amplified by scale and complexity. Here’s a snapshot of the issues:

Token Sprawl: OAuth account tokens being passed around like candy at Halloween. (I feel like I need to make an analogy about cavities and decay, but I’ll just leave that here because ew.)

Access Management: Misaligned permissions, often shared across workloads, create opportunities for breaches.

Auditing and Compliance: Many HPC environments and collaborative research projects struggle to track what access NHIs have, much less prove compliance with regulations.

Security Gaps: Relying on directories and manual processes doesn’t cut it when workloads operate across different systems and organizations.

A common example? Research collaboration in HPC environments. These systems often involve shared resources accessed by NHIs with wildly varying permissions. Without precise controls, compliance becomes a nightmare, and auditing feels like playing whack-a-mole with invisible targets.

Directories: The Bottleneck We Can’t Ignore

But wait! We have directories to keep everything organized! Won’t that help? (All my enterprise IAM friends just did a full-body cringe reading that.)

Here’s the thing about directories: they’re fantastic for managing human identities in traditional environments. But when it comes to NHIs, directories quickly become a bottleneck. Why? Because they assume every identity—human or non-human—can be neatly slotted into a joiner-mover-leaver model.

For NHIs, this model is fundamentally flawed:

No Natural Lifecycle: Workloads, APIs, and batch jobs don’t “move” or “leave” in the same way people do. They’re created and destroyed based on operational needs, often spinning up and down in milliseconds. A directory simply can’t keep pace with this churn.

Token Dependency: OAuth tokens are often used as a workaround, passed around to grant temporary access. But this approach doesn’t scale—it’s prone to sprawl, lacks visibility, and creates security risks when tokens are misused or stolen.

Lack of Context: Directories were designed for human-centric workflows, meaning they lack the context required to manage the nuanced relationships NHIs have with systems, resources, and data.

The result? Academic IAM systems often end up overburdened and unable to scale to the demands of modern, complex environments. Imagine trying to cram a sprawling HPC infrastructure into a directory originally built to manage faculty and students—it’s like forcing a square peg into a round hole.
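
To make the contrast concrete, here is a minimal Python sketch of the credential pattern that fits NHIs better than a directory entry: the workload presents its identity and receives a short-lived, narrowly scoped token that expires on its own, so there is no “leaver” step to forget. This is illustrative only; the HMAC scheme, claim names, and SPIFFE-style ID are stand-ins (not WIMSE or any product’s API), and a real issuer would use asymmetric keys and attested workload identities.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical issuer secret; a real issuer would use asymmetric keys
# anchored in workload attestation (e.g., a SPIFFE-style identity system).
SECRET_KEY = b"demo-only-not-for-production"

def mint_workload_token(workload_id: str, scope: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, narrowly scoped token for a workload.

    Unlike a directory entry, the credential carries its own expiry:
    nothing has to "leave" -- it simply stops being valid.
    """
    claims = {
        "sub": workload_id,   # e.g. "spiffe://campus-hpc/batch/job-42"
        "scope": scope,       # least-privilege scope, not a group mapping
        "exp": int(time.time()) + ttl_seconds,
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_workload_token(token: str, required_scope: str) -> bool:
    """Check signature, expiry, and scope before granting access."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["exp"] > time.time() and claims["scope"] == required_scope

# A batch job gets a five-minute credential scoped to one dataset, then expires.
token = mint_workload_token("spiffe://campus-hpc/batch/job-42", "read:dataset-7")
assert verify_workload_token(token, "read:dataset-7")
```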

The Role of DevOps, IT, and IAM Teams

Managing NHIs isn’t a one-team job—it’s a cross-functional effort. DevOps and IT teams usually own the operational infrastructure, while IAM teams handle policy enforcement. But these groups often speak different “languages,” making collaboration tricky.

That’s where standards and architecture frameworks come in. Efforts like the IETF’s WIMSE draft aim to create a shared understanding of how to secure NHIs in multi-system environments. It’s a step in the right direction, but adoption isn’t straightforward.

Building Better NHI Management

So, how can academia start tackling the NHI problem more effectively?

Establish Clear Ownership: Decide who is responsible for managing NHIs, from provisioning to decommissioning.

Adopt Standards: Leverage frameworks like SPIFFE and WIMSE to create consistent, scalable trust models. Learn how to use the Shared Signals Framework and the Continuous Access Evaluation Profile (CAEP).

Invest in Automation: Automate the boring stuff, like token issuance and revocation, to reduce human error. (Hot take: CAEP can help here, too; see the sketch after this list.)

Foster Collaboration: Create spaces for DevOps, IT, and IAM teams to align on priorities and processes.
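
Here is what such an automated signal can look like. CAEP events travel as Security Event Tokens (RFC 8417): JSON payloads, signed as JWTs, whose events claim is keyed by an event-type URI. The sketch below assembles a session-revocation event aimed at a workload; the issuer, audience, and subject identifier are hypothetical, and a real transmitter would sign and deliver the payload over the Shared Signals Framework rather than build it by hand.

```python
import json
import time

# Sketch of a CAEP "session revoked" Security Event Token payload (RFC 8417).
# The event-type URI follows the public CAEP spec; issuer, audience, and
# subject are made-up examples.
caep_session_revoked = {
    "iss": "https://idp.example.edu",           # hypothetical transmitter
    "aud": "https://hpc-gateway.example.edu",   # hypothetical receiver
    "iat": int(time.time()),
    "jti": "unique-set-id-0042",                # unique token identifier
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {
                "format": "opaque",
                "id": "workload-batch-job-42",  # the NHI whose access is cut
            },
            "event_timestamp": int(time.time()),
        }
    },
}

# In practice this payload is signed as a JWT and pushed or polled via the
# Shared Signals Framework; printing it here just shows the shape.
print(json.dumps(caep_session_revoked, indent=2))
```
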
Looking Ahead

The future of NHIs in academia isn’t just about solving today’s problems—it’s about enabling the next generation of research. Imagine a world where workload identities are as dynamic as the systems they operate in, seamlessly supporting complex collaborations across institutions. Standards and open-source tools will be key to making that vision a reality.

But here’s the catch: it’s not just a technical challenge. NHIs require governance, funding, and attention from leadership to ensure they’re managed sustainably. Without these, even the best tools won’t fix the problem.

I’ll be talking about this at the 2024 Internet2 TechEx in Boston. If you’d like my slides, drop me a note on LinkedIn and I’ll be happy to share!

Reach out if you want to learn more about navigating this process or need support with standards development. With my experience across various SDOs, I’m here to help guide you through the complexities of Internet standards development.

The post Rethinking Identity Management: The Role of Non-Human Identities in Academic Research appeared first on Spherical Cow Consulting.


Dock

Dock is partnering with Socure to revolutionize digital identity verification


We're excited to share that Dock is partnering with Socure to revolutionize digital identity verification!

Socure’s mission has always been clear: verify 100% of good identities in real-time and eliminate identity fraud. With over 2,600 customers across financial institutions, government agencies, and leading enterprises, they’re proud to be the gold standard in digital identity verification.

Now, by teaming up we’re taking these capabilities to the next level. 

The partnership allows us to combine their AI-driven analytics with our decentralized identity infrastructure to offer a more flexible, secure, consumer-centric identity solution.

We’re thrilled about what’s ahead and can’t wait to see the innovative solutions we’ll build together. 

Stay tuned for more updates on how we’re redefining trust in the digital world!


Thales Group

Thales’s Friendly Hackers unit invents metamodel to detect AI-generated deepfake images

As part of the challenge organised by France's Defence Innovation Agency (AID) to detect images created by today’s AI platforms, the teams at cortAIx, Thales’s AI accelerator, have developed a metamodel capable of detecting AI-generated deepfakes. The Thales metamodel is built on an aggregation of models, each of which assigns an authenticity score to an image to determine whether it is real or fake. Artificially generated AI image, video and audio content is increasingly being used for the purposes of disinformation, manipulation and identity fraud.

Artificial intelligence is the central theme of this year’s European Cyber Week from 19-21 November in Rennes, Brittany. In a challenge organised to coincide with the event by France's Defence Innovation Agency (AID), Thales teams have successfully developed a metamodel for detecting AI-generated images. As the use of AI technologies gains traction, and at a time when disinformation is becoming increasingly prevalent in the media and impacting every sector of the economy, the deepfake detection metamodel offers a way to combat image manipulation in a wide range of use cases, such as the fight against identity fraud.

AI-generated images are created using AI platforms such as Midjourney, Dall-E and Firefly. Some studies have predicted that within a few years the use of deepfakes for identity theft and fraud could cause huge financial losses. Gartner has estimated that around 20% of cyberattacks in 2023 likely included deepfake content as part of disinformation and manipulation campaigns. Their report[1] highlights the growing use of deepfakes in financial fraud and advanced phishing attacks.

“Thales’s deepfake detection metamodel addresses the problem of identity fraud and morphing techniques,”[2] said Christophe Meyer, Senior Expert in AI and CTO of cortAIx, Thales’s AI accelerator. “Aggregating multiple methods using neural networks, noise detection and spatial frequency analysis helps us better protect the growing number of solutions requiring biometric identity checks. This is a remarkable technological advance and a testament to the expertise of Thales’s AI researchers.”

The Thales metamodel uses machine learning techniques, decision trees and evaluations of the strengths and weaknesses of each model to analyse the authenticity of an image. It combines various models, including:

The CLIP method (Contrastive Language-Image Pre-training) involves connecting image and text by learning common representations. To detect deepfakes, the CLIP method analyses images and compares them with their textual descriptions to identify inconsistencies and visual artefacts.

The DNF (Diffusion Noise Feature) method uses current image-generation architectures (called diffusion models) to detect deepfakes. Diffusion models are based on an estimate of the amount of noise to be added to an image to cause a “hallucination”, which creates content out of nothing, and this estimate can be used in turn to detect whether an image has been generated by AI.

The DCT (Discrete Cosine Transform) method of deepfake detection analyses the spatial frequencies of an image to spot hidden artefacts. By transforming an image from the spatial domain (pixels) to the frequency domain, DCT can detect subtle anomalies in the image structure, which occur when deepfakes are generated and are often invisible to the naked eye.
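
As a toy illustration of the frequency-analysis idea (not Thales’s actual metamodel, whose constituent models and weights are not public), the sketch below computes a 2D discrete cosine transform of a grayscale image, scores the share of energy in high spatial frequencies, and averages it with other hypothetical detector scores into a single meta-score.

```python
import numpy as np
from scipy.fft import dctn

def dct_artifact_score(gray_image: np.ndarray, cutoff: float = 0.5) -> float:
    """Crude frequency-domain cue: fraction of DCT energy above a cutoff.

    Generated images often leave statistical traces in the frequency domain
    that are invisible to the naked eye; this toy score measures how much
    energy sits in the high-frequency corner of the spectrum.
    """
    coeffs = dctn(gray_image.astype(float), norm="ortho")
    h, w = coeffs.shape
    yy, xx = np.mgrid[0:h, 0:w]
    high = (yy / h + xx / w) > 2 * cutoff  # mask for high spatial frequencies
    energy = np.square(coeffs)
    return float(energy[high].sum() / energy.sum())

def meta_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate per-model authenticity scores into one weighted meta-score."""
    total = sum(weights.values())
    return sum(weights[name] * s for name, s in scores.items()) / total

# Hypothetical pipeline: each detector returns a score in [0, 1].
image = np.random.rand(256, 256)  # stand-in for a real grayscale image
scores = {"clip": 0.7, "dnf": 0.4, "dct": dct_artifact_score(image)}
weights = {"clip": 1.0, "dnf": 1.0, "dct": 0.5}
print(f"meta authenticity score: {meta_score(scores, weights):.3f}")
```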

The Thales team behind the invention is part of cortAIx, the Group’s AI accelerator, which has over 600 AI researchers and engineers, 150 of whom are based at the Saclay research and technology cluster south of Paris and work on mission-critical systems. The Friendly Hackers team has developed a toolbox called BattleBox to help assess the robustness of AI-enabled systems against attacks designed to exploit the intrinsic vulnerabilities of different AI models (including Large Language Models), such as adversarial attacks and attempts to extract sensitive information. To counter these attacks, the team develops advanced countermeasures such as unlearning, federated learning, model watermarking and model hardening.

In 2023, Thales demonstrated its expertise during the CAID challenge (Conference on Artificial Intelligence for Defence) organised by the French defence procurement agency (DGA), which involved finding AI training data even after it had been deleted from the system to protect confidentiality.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

1 2023 Gartner® Report on Emerging Cybersecurity Risks.

2 Morphing involves gradually changing one face into another in successive stages by modifying visual features to create a realistic image combining elements of both faces. The final result looks like a mix of the two original appearances.


SC Media - Identity and Access

Semperis HIP conference tries to diagnose healthcare cybersecurity

Identity protection in healthcare was a dominant theme at last week’s Semperis HIP conference, with many participants offering guidance on how to improve medical cybersecurity.


Tuesday, 19. November 2024

KuppingerCole

Identity Security and Management – Why IGA Alone May Not Be Enough


Organizations are confronted with unprecedented challenges in managing and securing identities across hybrid environments due to the growing complexity of the digital landscape.

While Identity Governance and Administration (IGA) solutions provide a foundation, the increasing complexity of identity ecosystems demands a more comprehensive approach to maintain visibility, security and control.

Modern identity management requires solutions that can bridge the gap between IGA and directory management. Advanced tools can consolidate visibility across hybrid environments, provide fine-grained control, and enhance delegation capabilities. These solutions complement IGA by addressing the limitations of native directory management and improving overall security posture.

Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will look not only at the challenges of breadth vs. depth in managing target systems, but also at the common scenario of different teams being responsible for different parts of the infrastructure, such as the IGA solution vs. Microsoft Active Directory. He will provide insights into when to use multiple solutions and discuss approaches to a TOM (Target Operating Model) that leads to consistent management of diverse environments.

Robert Kraczek, Global Strategist at One Identity, will showcase how solutions like Active Roles can serve as connectors to various directories, providing a single pane of glass for hybrid environments. He will demonstrate how these tools enhance security, improve efficiency, and complement existing IGA solutions to address the complexities of modern identity ecosystems.




FindBiometrics

ID Tech Digest – November 19, 2024

Welcome to ID Tech’s digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: Major Australian Retail Chain Found in […]

SC Media - Identity and Access

Biometric Frontiers: Unlocking The Future Of Engagement - Andras Cser, Enza Iannopollo - ASW #308


FindBiometrics

Sumsub Finds AI-driven ‘Democratization of Fraud’ in New Report

Sumsub is offering a wide-angle view of the evolving threats and defenses in the identity fraud landscape through its 2024 Identity Fraud Report, with a particular focus on the intersection […]

SC Media - Identity and Access

Google DeGoogled, Hammerbarn, Blofeld, VMWare, DeepData, SafePay, Josh Marpet and... - SWN #432


FindBiometrics

Collabria Financial Partners with Trulioo on Digital Identity Verification for Credit Unions

Collabria Financial Services, a major Canadian credit card issuer serving credit unions, has implemented digital identity verification technology through a new partnership with Trulioo. The integration aims to enhance the […]

Invixium’s IXM WEB 3.0 Attains LenelS2 OnGuard Certification for Biometric Access Control

Invixium’s IXM WEB 3.0 software platform has received factory certification for compatibility with versions 8.2 and 8.3 of LenelS2’s OnGuard access control system through the LenelS2 OpenAccess Alliance Program (OAAP). […]

IDEMIA Celebrates Another Round of Strong NIST Performances

IDEMIA Public Security has achieved top rankings across multiple categories in the latest National Institute of Standards and Technology (NIST) benchmark tests. These evaluations serve as the global standard for […]

Maryland MVA Launches Business App for Mobile Driver’s License Verification

The Maryland Department of Transportation’s Motor Vehicle Administration (MVA) has launched a new mobile application that enables businesses to accept mobile driver’s licenses (mDLs) and identification cards for secure, in-person […]

UNDP Partners with cBrain to Drive Digital Transformation Across Africa

The United Nations Development Programme (UNDP) and Danish software company cBrain have established a partnership to accelerate digital transformation initiatives across Africa, focusing on digital public infrastructure, financial inclusion, and […]

Sydney Pubs Deploy Facial Recognition to Block Self-Excluded Gamblers

Facial recognition technology is being implemented in Sydney pubs as part of an initiative to identify and prevent self-excluded gamblers from accessing gaming venues. The system, developed through a partnership […]

Vision-Box Launches Biometric Border Control System at Sint Maarten Airport

Vision-Box has successfully implemented its Seamless Border Programme at Princess Juliana International Airport (SXM) in Sint Maarten, marking a significant advancement in border control technology at the Caribbean aviation hub. […]

Major Australian Retail Chain Found in Breach of Privacy Laws Over Facial Recognition Use

The Office of the Australian Information Commissioner (OAIC) has determined that Bunnings Group Limited violated Australian privacy laws by implementing facial recognition technology across 63 of its hardware stores in […]

Ontology

Ontology 7th Anniversary

The Next Chapter

This year, Ontology marks its 7th anniversary with exciting campaigns that reward community engagement, stakers, and node operators. With a total prize pool of 5900 ONG, this celebration promises a little something for everyone. From staking rewards to interactive social challenges, Ontology invites you to be part of this milestone event. Dive in from November 18 to December 15 and see what you can achieve.

Campaign Highlights

Stake & Prosper (11.18–12.2): 2000 ONG are up for grabs, with rewards for stakers across multiple tiers starting at over 100 ONT. Boost your stake, climb the leaderboard, and claim your share.

Boost Your Node (starting 12.15): A 700 ONG prize pool awaits, along with additional ONT liquidity for the standout nodes. This campaign will spotlight the most active nodes in terms of APR and user growth, showcasing the power and dedication of our network’s backbone.

Get Involved with Fun Community Campaigns

Prove Your Humanity (11.18–12.2): Partnering with Orange Protocol, this initiative highlights the importance of identity and security, offering 1000 ONG to users who verify their humanity score. It is a reminder that in a world leaning digital, trust and verification matter more than ever.

Ontonaut’s Insight Rally on TaskOn (11.18–12.2): With 500 ONG set aside, this rally calls on community members to share insights, join discussions, and engage on social platforms.

OntoStars: 7th Anniversary Social Challenge on Galxe (11.18–12.2): Complete tasks, join the conversation, and celebrate alongside us. Participants can tap into a 200 ONG prize pool — because what’s an anniversary without a bit of a fanfare?

Zealy Quests: Take part in two engaging Zealy quests designed to weave ecosystem and industry storytelling. Rewards include 300 ONG and exclusive NFTs for those who shine the brightest.

Join us as we celebrate this milestone and witness Ontology’s unwavering commitment to building a secure, engaged, and inclusive ecosystem. Think you have what it takes? Get in on the action and don’t miss out on the rewards.

Ontology 7th Anniversary was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Radiant Logic

ITGC Controls: Why Are They Essential And How To Execute Them?

ITGCs are an essential part of your strategy for securing and enforcing access rights. Find out why... And how to optimize them! The post ITGC Controls: Why Are They Essential And How To Execute Them? appeared first on Radiant Logic.

CyberArk Privilege Cloud: Protect your Privileged Accounts with a SaaS Solution

Learn more about the benefits of CyberArk's Privilege Cloud, a PAM solution in SaaS mode, and discover ways to extend its capabilities. The post CyberArk Privilege Cloud: Protect your Privileged Accounts with a SaaS Solution appeared first on Radiant Logic.

Are User Access Review And Access Recertification The Same Thing?

The user access review is a control function that is separate from the recertification of access rights. Learn more about why and when to use it. The post Are User Access Review And Access Recertification The Same Thing? appeared first on Radiant Logic.

Is Your European Company Prepared For The Digital Operational Resilience Act (DORA)?

Uncover DORA regulations, five key focus areas, and the significance of operational resilience in finance; get compliant with Radiant Logic's secure software solutions. The post Is Your European Company Prepared For The Digital Operational Resilience Act (DORA)? appeared first on Radiant Logic.

Reducing IAM Technical Debt with an Identity Data Fabric Approach 

Gartner lists 5 key challenges that result from IAM technical debt; get our four-step approach to a solution based on our Identity Data Fabric. The post Reducing IAM Technical Debt with an Identity Data Fabric Approach appeared first on Radiant Logic.

ISMG Survey Finds that Many Identity Teams Lack Visibility and Operational Maturity

ISMG research surveyed over 100 IT leaders on their IAM challenges, and we’re pleased to share the results with you. The post ISMG Survey Finds that Many Identity Teams Lack Visibility and Operational Maturity appeared first on Radiant Logic.

Spring is Springing: What’s New from Radiant Logic in Spring 2024

Learn what Radiant Logic is bringing to identity in 2024. Our spring release makes it easy to connect, manage, and govern identity data—see what AI can do for identity. The post Spring is Springing: What’s New from Radiant Logic in Spring 2024 appeared first on Radiant Logic.

Making Identity Hygiene a Non-Negotiable for Organizational Security

Identity hygiene is the number one common denominator in any IAM program.  Without clean data there are no accurate results.   The post Making Identity Hygiene a Non-Negotiable for Organizational Security appeared first on Radiant Logic.

Artificial Intelligence and Identity and Access Management

Dive into the powerful influence AI-driven IAM, IGA, and generative AI for IAM will have in 2024, and the advantages you will find with an IAM Copilot on your side. The post Artificial Intelligence and Identity and Access Management appeared first on Radiant Logic.

Revolutionizing IAM with RadiantOne AI and AIDA

Learn how generative AI technology will revolutionize the way organizations govern and visualize identity data with unprecedented speed and accuracy. The post Revolutionizing IAM with RadiantOne AI and AIDA appeared first on Radiant Logic.

SC Media - Identity and Access

AWS expands MFA requirements following strong adoption rates

Following the introduction of compulsory MFA for root users in May 2024, over 750,000 root users have activated MFA, with adoption rates doubling after AWS included FIDO2 passkeys as an authentication option.



Thales Group

Protecting aircraft with artificial intelligence: Thales and partners selected for first European project to develop sovereign AI for embedded cyberdefence

Thales has been selected for the Artificial Intelligence Deployable Agent (AIDA) project funded by the European Commission through the European Defence Fund (EDF). A total of 28 European industry partners, start-ups and research centres have joined forces on this project to develop a sovereign AI-enabled cybersecurity agent to protect aircraft systems from cyberattacks. The goal of this three-and-a-half-year European project is to design an AI with an autonomous or semi-autonomous response capability to provide cybersecurity protection for aircraft systems such as onboard computers and electronic warfare systems on combat aircraft, which are vulnerable to increasingly sophisticated cyberattacks in today’s high-intensity conflicts. AIDA is the first European structural framework project in support of the NATO concept of Autonomous Intelligent Cyberdefence Agent (AICA).[1]

Thales is technical coordinator for the AIDA project funded by the European Commission, with CR14 in Estonia in charge of overall project coordination.


This EDF project is a response to three major challenges faced by the armed forces today: attack surfaces are growing due to battlespace digitisation; the cyberattack detection-response chain needs to be automated due to the ever-greater use of autonomous systems such as drones and robots; and AI is being used ever more widely both to launch and respond to cyberattacks.

Christophe Salomon, Executive Vice President, Secure Communications & Information Systems, Thales: “This project initiated by the European Union is fundamental to the security of our combat systems and the sovereignty of our cyberdefence capabilities. It is a chance for Thales to consolidate its strengths in onboard aircraft systems and sovereign cybersecurity solutions, and a further opportunity to leverage our AI hacking expertise. Thales's AI accelerator, and in particular cortAIx, will be directly involved in the AIDA project. The ultimate goal is to employ AI-enabled techniques for detecting threats and protecting aircraft systems from the growing risks and dangers encountered in today’s high-intensity, technology-driven conflicts.”

Responding to the 2023 European Defence Fund call for projects for the development of deployable autonomous AI agents,[2] Thales submitted an innovative proposal based on the training of intelligent cyberdefence agents capable of identifying, protecting, detecting and responding to cyberthreats in real time in the five military operating domains:[3] land, air, sea, space and cyberspace.

Thales will also lead the development of a prototype that uses frugal AI agents to protect electronic warfare equipment installed on combat aircraft. This prototype will be tested, using Thales’s Cybels Analytics solution in particular, in scenarios including cyber-electromagnetic threats and advanced adversarial AI attacks.

AI is being used increasingly in the theatre of operations to increase the detection performance of air defence radars, for example, and to help plan tactical missions and assign tasks to swarms of drones and robotic systems. This type of AI must be reliable, robust and cybersafe to prevent it being exploited by hostile forces in any environment (land, sea, air, space and cyberspace). To counter this type of threat, Thales’s Friendly Hacker Unit will conduct a battery of adversarial AI attacks and define appropriate countermeasures to ensure that these cyberdefence AI agents can never become targets themselves.

Global leader in data protection and cybersecurity

As a world leader in cybersecurity, with more than 5,800 experts in 68 countries, Thales is involved at every stage in the civil and defence value chain: Identify, Protect, Detect, Respond, Restore. Thales develops sovereign products including encryptors and sensors for governments and institutions to protect their critical information systems, as well as sovereign cyberthreat detection products to protect embedded and onboard systems. Thales is a trusted partner of the Galileo satellite navigation system, operating a number of national encryption laboratories in Europe and supplying NATO member countries with the only tactical IP encryptor with "Cosmic Top Secret" security certification. Thales is also a strategic partner of the German, UK, French and Belgian defence ministries for the construction and handover of key management centres and infrastructure.

AI at Thales

Thales is a major player in trusted, cybersafe, transparent, explainable and ethical AI for armed forces, aircraft manufacturers and critical infrastructure providers. The Group employs over 600 engineers specialising in AI, and around 100 doctoral candidates are conducting their AI research with Thales. Organised within Thales’s AI accelerator across research (AI Lab), systems including decision support systems (AI Factory), and sensors including sonar, radar, radios and optronics (AI Sensors), these experts are helping to incorporate AI into over 100 of Thales’s products and services. Thales’s AI capabilities draw on the most advanced sensor and system technologies to address the full spectrum of user requirements in the defence, aviation, space, cybersecurity and digital identity industries. Trusted AI is designed to meet the specific security and sovereignty needs of Thales’s customers. It brings greater efficiency to data analysis and decision support and speeds up the detection, identification and classification of objects of interest and target scenes, while taking account of specific constraints such as cybersecurity, embeddability and frugality in critical environments.

In 2023, the Group was Europe’s top patent applicant in the field of AI for mission-critical systems. Also in 2023, the Group's Friendly Hacker Unit demonstrated its credentials at the CAID challenge (Conference on Artificial Intelligence for Defence) organised by the French defence procurement agency (DGA), which involved finding AI training data even when it had been deleted from the system to preserve confidentiality.

Thales’s European partners in the AIDA project:

SIHTASUTUS CR14 (CR14)

THALES SIX GTS France (TSGF)

THALES SA (TRT)

THALES AVS FRANCE SAS (TAVS)

THALES DMS FRANCE SAS (TDMS)

INDRA SISTEMAS SA (IND)

LEONARDO - SOCIETA PER AZIONI (LDO)

TELESPAZIO SPA (TPZ)

AIT AUSTRIAN INSTITUTE OF TECHNOLOGY GMBH (AIT)

SPACE HELLAS ANONYMI ETAIREIA SYSTIMATA KAI YPIRESIES TILEPIKOINONIONPLIROFORIKIS ASFALEIAS - IDIOTIKI EPICHEIRISI PAROCHIS YPERISION ASFA (SPH)

HONEYWELL INTERNATIONAL SRO (HON)

WOJSKOWA AKADEMIA TECHNICZNA IM.JAROSLAWA DABROWSKIEGO (WAT)

EVIDEN TECHNOLOGIES SRL (EVD)

Decent Cybersecurity s. r. o. (DEC)

GYALA S.R.L. (GYA)

FORSVARETS FORSKNINGINSTITUTT (FFI)

SensorFleet Oy (SEN)

NIXU OYJ (NIX)

Aliter Technologies, a.s. (ALI)

THALES EDISOFT PORTUGAL, S.A. (EDI)

HITEC LUXEMBOURG SA-HITEC (HIT)

MINISTERUL APARARII NATIONALE (MET)

INSTITUTO SUPERIOR DE ENGENHARIA DO PORTO (ISEP)

DOTOCEAN (DOT)

WB Electronics S.A. (WBE)

ADVOKAADIBUROO SORAINEN OU (SOR)

HarfangLab SAS (HAR)

AKHEROS SAS (AKH)

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


1 Autonomous Intelligent Cyber Defense Agent (AICA): A Comprehensive Guide | SpringerLink: https://link.springer.com/book/10.1007/978-3-031-29269-9

2 EDF-2023-DA-CYBER-DAAI: Deployable Autonomous AI Agent

3 Multi-Domain Operations in NATO – Explained – NATO’s ACT: https://www.act.nato.int/article/mdo-in-nato-explained/#:~:text=Within%20NATO’s%20structure%20there%20are,independent%20entities%20within%20national%20militaries.


Thales empowering British Army with integrated technologies at Army Warfighting Experiment


Thales in the UK, a leader in advanced technologies, showcased its advanced integration capabilities at this year’s Army Warfighting Experiment (AWE). 

Building on last year’s success, Thales demonstrated a comprehensive suite of solutions designed to provide soldiers with enhanced situational awareness, communication, and electronic warfare capabilities, optimised for operations in the urban environment.  

Demonstrated Capabilities: 

Thales' demonstration at AWE was centred around the integration of ground-based systems with airborne intelligence platforms.  These included a Thales ISTAR Node, SquadNet compact soldier radios, an HFXL radio and Storm 2 – Thales’ ultra-compact, lightweight personal CEMA system.

AWE saw Storm 2 in the hands of soldiers for the first time, utilising the system to identify data signals as they moved through the urban environment. Networked via Thales’ SquadNet radios, Storm 2, upon identifying a potentially hostile network, cued the ISTAR sensor onto the relevant location. The resulting intelligence was then rapidly sent back to the soldier, with imagery and wider data on the potential threat. All these processes, from initial detection to intelligence delivery, utilised Thales’ advanced algorithms to dramatically simplify the process – removing cognitive burdens from the end-user.

Thales also demonstrated its newly released data-first HFXL radio with Allied forces.  This technologically advanced wideband HF radio integrates with SquadNet radios to produce a comprehensive communication network that links frontline soldiers, commanders, and intelligence platforms, in real time, without the need for any communications infrastructure.

The prototype ISTAR Node is at the heart of this dynamic architecture, ensuring that the electro-optical camera data from an Uncrewed Air System (UAS) overhead is available to soldiers via their SquadNet radios, and that electronic intelligence from Storm 2 is available to the UAS crew as well as relayed back to the headquarters.

The ISTAR Node also connected Allied forces, with targets acquired by a French sighting system and Storm 2 CEMA contacts used to cross-cue the UAS and resulting imagery delivered to soldiers on the ground. A subsequent attack by ground forces was monitored by the UAS, with the ISTAR Node using Thales’ HFXL radio bearer to relay the full tactical situational awareness picture (imagery and CEMA) back to the headquarters.

Thales Technologies at AWE:

Storm 2: Lightweight sensing and electronic warfare systems carried by soldiers that can detect hostile networks, protect users against RCIEDs and deploy CEMA offensively to achieve operational advantage through electronic influence.

SquadNet: A compact, yet highly capable soldier radio, enhancing decision-making on the battlefield, and enabling the networking of disparate sensors to provide a comprehensive tactical picture.

HFXL Radio: Thales' revolutionary data-first radio, enabling long-range, infrastructure-free communication, allowing commanders to maintain operational control at distance.

ISTAR Node: Thales’ prototype Digital Architecture for UAS enables processing, exploitation and dissemination of tactical data between aerial and ground-based units to accelerate the sense-decide-effect chain. It will also facilitate rapid capability insertion to get the most out of innovative technologies such as AI in the tactical battle.

John Dix, UK Sales Manager for Land Communications, explained the importance of these technologies: "The integration of these systems is a game-changer for the modern battlefield. For the first time, soldiers can get immediate, accurate intelligence from UAVs while out on operations. At the same time, commanders can make informed decisions based on real-time data, transmitted securely over HFXL radios, which is critical for operations in denied environments.  Taken together, this ensures our troops will have a tactical advantage in any urban operation.”

Delivering Modernisation and Protection:

By integrating tactical UAS through Thales’ ISTAR Node with ground-based systems like Storm 2, the British Army can act more decisively, leveraging real-time intelligence to blunt and dislocate enemy operations. These advancements not only bolster protection but also enable offensive actions, creating a more agile and flexible force.

The newly released HFXL radio also plays a key role in these developments. Offering secure, infrastructure-free communication over long distances, this data-first radio allows commanders to maintain operational control even in denied environments. Compared to other wideband radios on the market, Thales’ HFXL system is unique in using multiple non-contiguous channels and a cognitive engine that continually scans the HF spectrum and selects the optimum frequencies. This makes HFXL inherently more secure, more efficient, harder to jam and capable of transmitting large data packages without exposing soldiers’ positions.

Why It Matters:

Thales' integrated solutions demonstrated at AWE reflect the future of combat, where advanced algorithms, sensors, uncrewed systems, and communication technologies come together to ensure that British soldiers remain better informed, better protected, and more lethal. By providing seamless data communication and tactical intelligence from the frontline to command, Thales empowers soldiers to respond quickly and effectively to threats, ensuring the British Army maintains its operational advantage.

John Dix added,

AWE is critical for accelerating the modernisation of the British Army.  By getting these technologies into the hands of soldiers early in the product-development cycle, we can harness this feedback to fine-tune our systems, and fast-track their deployment, to ensure British and allied forces stay ahead in a world of rapidly evolving threats.

Tokeny Solutions

Tokeny’s Talent | Satjapong’s Story

Satjapong Meeklai is Senior DevSecOps Engineer at Tokeny.

Tell us about yourself!

Hi guys. My name is Satjapong Meeklai. I’m Thai, born and living in Bangkok. I’ve been passionate about technology since I was young, so I spent most of my time using computers and the internet to learn new things. After finishing high school, I decided to study computer science at a university in Thailand, and fortunately, after graduating, I got a Japanese government scholarship to continue my higher education in Tokyo. After obtaining a master’s degree, I came back to Thailand and devoted myself to hands-on technical work, trying various types of roles in this field. Eventually, I decided to focus on becoming a DevSecOps engineer working in fintech and Web3 startups, because I believe in these industries and want to contribute to this major shift, which I hope will create a positive impact on people around the world.

What were you doing before Tokeny and what inspired you to join the team?

After completing my master’s degree, I was drawn to data science and AI, so I pursued a role as a data scientist. I joined an early-stage startup focused on sentiment analysis, where I wore many hats due to the small team. Besides building machine learning models, I handled data cleaning, preprocessing, backend development, and DevOps. Through this experience, I discovered a passion for DevOps, leading me to shift my career in that direction.

I continued working in startups, valuing their fast-paced, impactful environments. For four years, I was the lead DevOps engineer at Opn, a Thai fintech company that became a unicorn. Managing a team of nine, I contributed meaningfully to the company’s success, which remains one of my proudest achievements.

While at Opn, I became interested in Blockchain and Web3, eventually leaving to join a small Web3 startup in digital asset custody. Although the company closed due to market challenges, my interest in Web3 grew. This led me to Tokeny, a platform for tokenizing traditional assets, which I see as a bridge to open finance in the Web3 era. I’m excited to help drive this transformation.

How would you describe working at Tokeny?

So far it has been very pleasant for me. People here are kind, nice, but work hard. They’re good at what they do. I can feel the determination of what we want to build and deliver to the community. We are here to create changes. That is what I can tell after working here for several months.

What are you most passionate about in life?

I think not dying in vain would probably be something I’m passionate about the most. I want the existence of myself to have a positive impact and influence on people around me. I no longer dream of changing the world myself but I’d like to support, contribute, and be a part of that something together with others to create positive results for the community, society, country, and/or the world instead. In the end, what I care about the most is myself being in a position where I’m proud and happy about myself, the decisions I made, the things I do, and do not regret how I live and what I’ve done to people around me.

What is your ultimate dream?

Be one of the early employees of an incredibly successful tech company while being a good, caring leader to my family.

What advice would you give to future Tokeny employees?

First you should believe in your own vision. Then try to align that vision with the company. Then you will find that whatever you do, either for yourself or the company, is meaningful in itself.

What gets you excited about Tokeny’s future?

I’m excited about our mission to build the world of open finance and how this will change the world of traditional finance. It’s pretty interesting to see what will change in 5 to 10 years from now with the power of the Web3 industry and Tokeny.

He prefers: Coffee over Tea, Book over Movie, Hybrid over working from the office or from home, Dogs over Cats, Text over Call, Burger over Salad, Mountains over Ocean, Beer over Wine, Countryside over City, Slack over Emails, Casual over Formal, Crypto over Fiat, and Morning over Night.


The post Tokeny’s Talent | Satjapong’s Story appeared first on Tokeny.


KuppingerCole

Analyst's View: Synthetic Data


by Anne Bailey

Synthetic data generation is a highly innovative solution to challenges of test data quality, data sharing, and data privacy and security. Senior Analyst Annie Bailey shares insights from the inaugural Leadership Compass on Synthetic Data on the dynamic development of this market.

Tuesday, 19. November 2024

SC Media - Identity and Access

Identity Security: Navigating the New Normal with Dr. Sean Murphy - Sean Murphy - CSP #201


Spruce Systems

Industry Spotlight: Top 10 Ways Verifiable Digital Credentials Can Transform Government

Explore how verifiable digital credentials can address challenges in government identity systems, offering secure, efficient, and privacy-focused solutions for a range of applications.
A Need for Verifiable Digital Credentials in Government

Government agencies face significant challenges in delivering secure, reliable identity credentialing and verification processes that are built for today’s digital world. Protecting residents' data from unauthorized access is essential, as is providing secure, accessible ways for residents to easily verify their identities across digital and physical channels as they go about their day-to-day.

The outdated, paper-based systems that exist today slow down government processes and introduce vulnerabilities, such as fraud, inefficiencies, and elevated administrative costs. These challenges not only affect the security and privacy of residents’ data, but also put a strain on government resources. We believe that to meet the demands of a digital-first society, agencies must transition away from paper-based credentials, which are vulnerable to tampering, to secure, verifiable digital credentials. Read on to learn more about the top 10 real world applications in government today, and how SpruceID is helping partner with agencies for digital transformation.

Today’s Top 10 Real-World Applications

When it comes to verifiable digital credentials in government, 10 use cases is just barely scratching the surface. However, the list below outlines several in-demand applications today where digital credentials bring significant advantages, greatly benefiting both government entities and the people they serve:

Mobile Driver’s License (mDL)Physical IDs may be the norm, but they are easily lost, stolen, or damaged, making residents vulnerable to identity theft and fraud. Law enforcement, businesses, and government agencies spend valuable time verifying IDs, and the reliance on physical cards slows down services and increases errors. With high-assurance verifiable digital credentials (VDCs), verification becomes faster, more secure, and far less vulnerable to tampering with. The added convenience and security offered by a mobile driver’s license creates a streamlined, fraud-resistant environment where residents don’t have to rely on easily compromised physical cards. Read more about how SpruceID helped the State of California implement their mobile driver’s license program, and the benefits they’ve seen so far. Outdoor Licenses and PermitsToday’s outdoor licenses and permits (such as boating or fishing licenses) are largely paper-based, which are easy to lose or counterfeit, and enforcing them can be difficult. Conservation officers lack real-time verification tools, making enforcement difficult and allowing illegal activities to go unchecked. Digital permits with VDCs provide instant, reliable proof with an easy way to verify the credentials, supporting conservation efforts and reducing illegal activities—all while protecting public lands and waters. Learn about how SpruceID worked with Utah to launch digital off-road vehicle permits and how they’ve benefitted. Incarcerated Individuals and Criminal Justice and Law EnforcementToday, approximately 27% of formerly incarcerated individuals are unemployed. This statistic highlights the significant barriers to employment these individuals face, particularly in accessing proper identification. The criminal justice system’s reliance on outdated, paper-based records not only creates vulnerabilities in identity management and record accuracy but also complicates access to essential rehabilitative services. These inefficiencies lead to security risks, identity errors, and hindered re-entry support. Verifiable digital credentials can help facilitate access to job applications, housing, and social services, while removing barriers to re-enter society and rebuild their lives. Marriage and Birth CertificatesPaper marriage certificates, birth certificates, and even social security cards are essential but vulnerable to loss, damage, and forgery, which complicates access to legal rights and government services. Verifying these documents can also be a slow process, creating roadblocks for individuals needing to prove familial status for health benefits, citizenship, and legal matters. VDCs ensure secure, instant access to these vital records, protecting individuals’ identities and preventing fraudulent claims. They can also help streamline processes such as enrolling your new baby onto your health insurance — as discussed in our recent blog post. Social Services Access (SNAP/Medicaid)Accessing social services with paper-based documentation is cumbersome and prone to errors. Individuals who qualify may face delays or rejections, while ineligible recipients can exploit the system, diverting funds from those in need. By using VDCs, agencies can improve efficiency and reduce fraud by allowing for real-time verification of eligibility, ensuring benefits reach the right individuals faster and reducing strain on the social services infrastructure. 
Civic Participation
As civic participation moves online, fraud and manipulation risks increase, threatening the integrity of activities such as responding to RFCs (requests for comments) or submitting feedback to political representatives. Verifiable digital credentials create a secure, accessible way to ensure that, for example, a commenter is a resident and not a bot, without oversharing information. This approach has also been considered for simultaneously improving our voting systems’ security and engagement with the new generation.

Land and Property Records
Paper-based land records can be misplaced, tampered with, or falsified, leading to property disputes, unclear ownership, and legal issues that impact families and businesses. To mitigate these issues and more, VDCs provide a secure way to manage property records, ensure property rights are protected, and enhance transparency in property ownership.

Disaster Relief
In times of disaster, quickly verifying the identity and eligibility of individuals seeking relief is crucial but challenging with traditional paper documents. When someone loses their paper documents, aid can be delayed, misallocated, or vulnerable to fraud, hindering the response and leaving affected people without timely support. VDCs allow for quick, secure verification of those in need, ensuring relief reaches the right people and enabling response teams to act efficiently during critical moments. Our credentials are accessible even in remote areas, without Wi-Fi or cell service.

Government Employee Access and Verification
Current reliance on physical IDs for government employees and veterans can lead to unauthorized access, fraud, and security breaches. Verifiable digital credentials provide a secure way to verify the identity of government employees, including military personnel and veterans, protecting restricted spaces and sensitive information while allowing instant access to the services and benefits that are exclusive to them.

Cross-Border Travel Credentials
Physical cross-border travel documents, such as customs clearance forms, visas, and health certificates, are vulnerable to forgery and theft, creating security risks and causing delays at border crossings. VDCs offer a way to consolidate identity, customs, and health credentials into one streamlined verification process. This speeds up clearance, improves safety, and enhances global security and health compliance, delivering a more efficient experience for travelers and border authorities alike.

SpruceID’s Solution

SpruceID works with a variety of public sector agencies to issue verifiable digital credentials, creating a system of trust, security, and convenience that can be applied across numerous government applications. Our Credible platform supports issuing a range of digital credentials, from mobile driver’s licenses (mDLs) to professional certifications. These credentials use cryptographic digital signatures, ensuring that they cannot be falsified, shared through screenshots, or recreated by AI-generated deepfakes.

Our solutions prioritize minimal disclosure of personal data, enabling residents to verify credentials with only essential information (for example, only needing to show your age to enter a bar while keeping your personal address hidden). This keeps personal data secure and compliant with privacy regulations, all while eliminating government tracking or surveillance. In addition, Credible helps to drive increased efficiency, as digital processes streamline verifications and reduce administrative bottlenecks, ultimately saving time for both agencies and residents. By minimizing reliance on paper, agencies significantly lower overhead costs related to printing, mailing, and administrative handling, creating a cost-effective, privacy-centric solution.

Shaping the Future of Digital Identity in Government

We envision a future where government agencies fully leverage verifiable digital credential solutions that align with standards and advance alongside open-source industry collaborations. 

Through partnerships with agencies such as the California DMV, we demonstrate our commitment to creating scalable, interoperable solutions. By embracing VDCs, government agencies can protect citizens, enhance services, and reduce administrative burdens, empowering everyone securely and efficiently. To learn more about how SpruceID could help your government agency, visit our website and get in touch with us.

Contact Us

SC Media - Identity and Access

Semperis HIP conference Day One: Microsoft mea culpa, a call for cybersecurity coalitions

Semperis’ Hybrid Identity show kicks off with a Microsoft mea culpa, hospital war games and an appeal for a coalition of the willing among cyber defenders.


Semperis HIP conference Day Two: Ransomware, resilience and identity reckoning

The second day of Semperis’ HIP conference featured frank advice about recovering from a ransomware attack, the nature of business resilience and the importance of identity security.


liminal (was OWI)

Solving for Trust by Design: The Identity Authorization Network Opportunity


The Market for Identity Authorization Networks


Thales Group

Thales confirmed as trusted provider to the G-Cloud 14 Framework

Thales has been selected as a supplier on the UK Government’s G-Cloud 14 framework, marking another significant milestone in the company’s commitment to providing cutting-edge, cyber-secured cloud solutions to the public sector.

The G-Cloud 14 framework is designed to simplify the procurement process for public sector organisations, enabling them to access a wide range of cloud-based services quickly and efficiently. 

Tony Burton, Managing Director of Cybersecurity & Trust at Thales UK, said: “We're proud to play our part in the secure, digital transformation of the public sector, which is delivering greater efficiency and improved services to the benefit of everyone across the UK. Our involvement in the G-Cloud framework allows us to provide cutting-edge cloud solutions that enhance data security, streamline operations, and ensure that government agencies can focus on their core mission—serving the public effectively.”

Thales Cyber Digital Solutions now available on G-Cloud 14

We offer a wide range of cloud hosting, cloud software and cloud support services under the G-Cloud 14 framework, including:

PKI and Key Management Services

We deliver highly assured Public Key Infrastructure (PKI) consultancy, managed services and solutions. Our PKI deployments range from design and build solutions hosted in the cloud, on customer premises or in Thales secure locations, to High Assurance PKIaaS and cloud based SaaS. Our technical solutions are based around an assured and accredited architectural blueprint which uses a modular design so components can be tailored to suit customer needs, such as sovereignty, and technical or business requirements. G-Cloud 14 offers include PKI Services, PKI Health Check, Cloud Hardware Security Module, Key Management Software and Digital Trust.

Security Operations Centre (SOC) Consultancy

Our NCSC Assured cybersecurity consultancy and engineering services help organisations transition to the effective and secure use and deployment of cloud services. Since the founding of our consultancy business in 1989, we have delivered the highest standard of technical and business change consultancy to government clients who work in complex technical and highly regulated environments. G-Cloud 14 offers include Secure By Design, Cyber Health Check, Cyber Vulnerability Investigations, Cyber Consultancy, Cloud Cybersecurity Management and Risk Protection Services.

Managed Detection and Response

Our Managed Detection and Response (MDR) provides comprehensive threat detection and incident response capabilities to enhance customer security capabilities. We act as an extension of our customers’ security teams and provide proactive threat detection, enabling effective response to cyber threats. Key features include 24/7 monitoring and alerting, global threat intelligence and expertise, advanced analytics, incident response, cyber threat intelligence, threat hunting, forensic analysis, reporting, client collaboration and partnership approach, scalability, and a tailored platform. G-Cloud 14 offers include SOC Maturity Assessment, SOC Advisory and Consulting, SOC Managed Detection and Response, Managed Endpoint Detection & Response, SOC as a Service.

Secure Connectivity Solutions for Police and Government

Thales Secure Connectivity Solutions are a modular set of solutions, including network access design, consultancy, deployment, monitoring, support and management, securing Police and Government networks, managed and monitored 24x7x365 from UK FSC (Facility Security Clearance) and PASF (Police Assured Secure Facility) secured sites. Thales’ High Assurance Remote Access, Secure Police and Government Internet Transit and Secure Connectivity Solutions offer a practical path in the transition from legacy PSN platforms and services towards a Cloud First strategy and the Law Enforcement Community Network (LECN). Thales Lawful Intercept and Data Analytics enable law enforcement in their mission of public protection, covering Lawful Interception (LI) data collection and mediation, in compliance with ETSI standards, across a wide set of monitoring probes, and also include a data analytics platform for turning raw data into actionable evidence.

All our services are designed to meet the stringent security requirements of public sector organisations, ensuring that sensitive data is protected at all times.

Find out more about what we have to offer on G-Cloud 14 here.


Thales strengthens Portugal’s very short-range air defence capabilities with ForceShield system


Thales is pleased to announce the signing of a contract with NSPA for the acquisition of a ForceShield system by the Portuguese Army. This is a landmark contract, not only because it is the first that Thales has signed for the benefit of the Portuguese Army, but also because it is the first ForceShield contract that Thales has signed with a European country. The contract is designed to strengthen Portugal's very short-range air defence (VSHORAD) capabilities.

ForceShield is Thales's air defence solution, designed to protect forces, citizens and critical infrastructure from an increasingly diverse range of airborne threats - from low-altitude threats such as Unmanned Aircraft Systems (UAS) to helicopters, fighter ground attacks or cruise missiles.

ForceShield provides the appropriate components to detect, coordinate, engage and neutralise threats. The contract includes a complete ForceShield Compact solution, with a Ground Master 200 air surveillance radar, a radio communications system, and the ControlView Air Defence command-and-control centre together with portable weapon allocation terminals. The air defence system also includes RapidRanger vehicles with the StarStreak high-velocity air defence missile and Lightweight Multirole Missile (LMM) anti-armour missiles.

The Ground Master 200 radar establishes tracks faster and keeps them locked for longer, maximising situational awareness for optimum engagement. This gives the ControlView time to evaluate the threat while allocating the appropriate firing units for air target engagement through each weapon terminal.

RapidRanger is a unique lightweight, vehicle-based, highly automated system capable of delivering a rapid reaction response to air and surface threats. It will be fitted with StarStreak and LMM missiles.

“Thales is proud to contribute effectively to the capability enhancement of the Portuguese Army. This first contract is a recognition of the trust that NATO countries and nations around the world place in our air defence systems to help safeguard their airspace sovereignty,” said Raphael Desi, Vice-President, Integrated Airspace Protection Systems, Thales.

About Thales in Portugal

Thales Portugal, with over 36 years of presence in the country, is a key contributor to Thales Group’s global operations. It operates two main centres of excellence: one for naval engineering, focused on developing combat management systems, and another for air traffic management solutions.

Thales employs over 400 people in Portugal, actively supporting the defence, aerospace, and cybersecurity sectors, while playing a significant role in advancing Portugal’s technological infrastructure.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialising in three business domains: Defence & Security, Aeronautics & Space and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


Sunday, 17. November 2024

KuppingerCole

Cyber Hygiene in the Age of AI

Matthias and Christopher discuss the critical importance of cyber hygiene in the corporate context, especially in light of evolving threats such as AI-driven attacks, deepfakes, and ransomware. They emphasize the need for organizations to train employees on recognizing and responding to these threats, as well as the role of technology in both perpetrating and preventing cybercrime. The discussion also touches on the growing issue of disinformation and the necessity for vigilance in verifying information.



Friday, 15. November 2024

SC Media - Identity and Access

Chris Inglis: Why cybersecurity success hinges on strategic choices, not just tech

Speaking to SC Media at the Hybrid Identity Protection Conference, Inglis outlined how intentional choices can either fortify or undermine an organization’s defenses.


AI, hybrid identity, and cybersecurity's next frontier: Key takeaways from Semperis CEO at HIP conference

Bresman talks with SC Media about the role of artificial intelligence in identity and access management and also touches on the challenges of protecting data in hybrid environments.


American Associated Pharmacies allegedly breached by Embargo ransomware

The Embargo ransomware group has claimed responsibility for an attack on U.S. independent pharmacy cooperative American Associated Pharmacies that purportedly resulted in the theft of 1.469 TB of data, reports The Register.


Holochain

The Holochain Foundation is Coming of Age

Organizational Shifts to Support Delivery

In the open-source world, there is a well-known dilemma: it’s very difficult to find funding for deep infrastructure-level projects. We knew, when envisioning Holochain, what we wanted to bring into the world — the capacity for groups of people to create digital spaces in which to engage without the need for any intermediaries or web servers. We wanted it to work with nothing but the computers of the very groups of people wanting to engage.

Well, that counts as a deep open-source infrastructure-level project. So we came up with a strategy to create a company that needed Holochain’s new infrastructural capacity, and could market its business proposition and plan. But instead of being owned by venture capital, it would be owned by an open-source foundation on behalf of all the eventual users of that infrastructure.

That vision led us to launch Holo as a distributed cloud hosting company for Holochain apps that would also use Holochain itself to manage that cloud infrastructure and do the value accounting between hosts and app providers. This was a complex and tall order. And so for the past few years, all of us in the Holo/Holochain world have been driving primarily to meet Holo’s needs as we’ve been implementing Holochain itself. Admittedly, it’s taken us significantly longer than we initially thought to build out the depth of the Holochain feature set, along with the complex infrastructure that Holo needed to offer generalized Holochain-based cloud hosting. It’s been hard going. In the meantime, the world has also changed around us, revealing new demands for where and how Holochain actually wants to be used in today’s market, demands that are different from what we initially envisioned.

To meet these changes, and to increase our capacity to deliver, we are updating our strategy, which means a significant organizational restructuring. You can read the announcement of this restructuring on the Holo.host press site.

But what does this mean more specifically for the Holochain Foundation? Foremost, the Foundation will move from being a passive holder of the Holochain intellectual property, on behalf of the community, into being the active operational entity supporting and managing the Holochain development team.

Part of our “coming of age” is realizing that we can’t do everything we might like. Focus matters. Our strategic plan for delivering on our mission of “fostering the evolution and thriving of the Holochain framework and related ecosystems” begins with just one thing: the stability and reliability of Holochain, such that it can be deployed as industrial-strength, mission-critical infrastructure by commercial projects as well as our community stakeholders. What this means in practice is testing, testing, testing! This includes:

- Continued build-out of our “Wind Tunnel” performance testing framework, so that we can verify that Holochain’s operating envelope, across each one of its features, meets or exceeds the demands of the specific projects currently bringing Holochain applications to market.
- Ensuring the sufficiency of testing code coverage of each of Holochain’s key features and undertaking any refactors necessary to bring them in line with our stakeholders’ needs.
- Upgrading our release patterns so that stakeholders delivering mission-critical apps can conditionally enable the more experimental and not-fully-tested features, while those stakeholders who are on the bleeding edge can help develop and test those very features.

We believe that the Holochain Foundation’s coming-of-age shifts us into delivering on our mission by serving our stakeholders better. We are deeply committed to our partners and ecosystem stakeholders that are currently delivering or developing Holochain apps: newer ones like Volla and its Messages app for the new Quintus Volla Phone, data provenance solutions from Kwaxala, verified data for semi-fungible token solutions including the recently released Jade City vaults, decentralized game data for esports, the new Visible Verification project, and a project with Carbistry in the voluntary carbon market domain; partners who’ve been with us for a long time, like Darksoil Studio, Humm Hive, Neighbourhoods, Coasys, Lightningrod Labs, Carbon Farm Network, Valueflows, and others; and of course, supporting the features needed by the reorganized Holo and HoloFuel organizations. We are also preparing for larger-scale adoption of Holochain solutions in 2025, including collaborations with major players in the industrial supply chain and media industries.

As part of the reorganization, and living into our commitment to focus, I will be stepping in as Executive Director of the Holochain Foundation. This will allow our current Executive Director, Mary Camacho, to concentrate on Holo’s new direction, as well as her passion for enabling commercial projects both directly and via new structures within the Foundation.

There’s much more in store for the Foundation going forward, especially around expanded and more formal structures for stakeholder involvement and engagement in Holochain’s development. This will take us a while to roll out, but you can expect more details in the coming months. If your work depends on Holochain and you have feedback on what you would like to see from the Foundation going forward, please email me and Paul at foundation2024@holochain.org.

Thanks for being with us as we grow up and into our next phase.

– Eric Harris-Braun

PS: We’ve been working on an update to the Holochain White Paper this year, and it’s finally published! It makes the case for a practical Byzantine-fault-tolerant system for everyday use, as distinct from systems that are robust but costly in practice. It’s accompanied by another paper, Players of Ludos, which tells the story of how Holochain works through the activities of a group of nomadic board game players.

Cover photo by Anton Sobotyak on Unsplash


KuppingerCole

Jan 21, 2025: Navigating the End of SAP IDM: Future-Proofing Identity Security and Compliance

The impending end-of-life for SAP Identity Management (IDM) presents a critical juncture for organizations relying on this solution. As support winds down by 2027, with extended maintenance until 2030, businesses face urgent challenges in maintaining robust identity and access management frameworks. This transition period offers a unique opportunity to modernize and unify identity security and governance strategies.

IDnow

Fraud in 2024: IDnow customers have their say.

We explore some of the challenges our customers have faced this year and how they plan to tackle fraud in 2025.

By the end of this year, more than 70 billion identity verification checks will have been made. In a world of just 8 billion people, this number appears absolutely staggering.  

However, when you consider how frequently people have their identity verified in this ‘always on, always connected’ world, the number is perhaps not as high as it first seems. 

Nowadays, people have their identity verified and reverified without giving it much thought, undergoing data and document checks and age verification to use most digital services. In the not-too-distant past, if you wanted to open a bank account, rent a vehicle or use a particular service, you would invariably be required to visit a brick-and-mortar store, clutching at least two forms of paper identification. Even then, the process was unlikely to conclude on the same day, with prospective customers often needing to wait several more days until their identity could be verified and they could access said service.  

Nowadays, thanks to a range of automated and in-person identity verification services, this can be done in a matter of minutes, affording unrivalled convenience that many would have thought impossible just a decade ago. Striking a balance between offering an identity verification process that is secure for the business but convenient for the customer is essential. Without it, the business runs the risk of fraud attacks, which can impact its reputation and bottom line and ultimately lead to customer abandonment. 

To discover the challenges that our customers have been facing in 2024 and what next year may look like, we conducted the inaugural IDnow Customer Survey 2024, featuring a number of clients across the UK, France and Germany. Respondents held a variety of positions, from Product Manager to Head of Compliance.

Top 3 identity verification challenges in 2024.

1. Increasing operational efficiencies and cutting costs (53%).
2. Keeping conversion rates high (47%).
3. Managing the volume and wide range of different types of fraud attacks / keeping up with technological developments in fraud and identity verification (both 41%).

How our customers tackled fraud in 2024.

The costs of a business falling victim to fraud go way beyond financial. Yes, fraud impacts the bottom line, but it can also have a disastrous effect on company reputation and lead to customers losing trust in the brand.    

To safeguard against this and prevent fraud, six out of 10 of our customers said they had conducted training sessions to enable staff to better identify internal and external fraud risks, while 53% said they had invested in new fraud prevention technologies. Just over a third (35%) said they had deployed multi-layered identity verification procedures, including data, biometric and database checks, such as PEPs and sanctions checks.

UK Fraud Awareness Report 2024: Learn more about the British public’s awareness of fraud and their attitudes toward fraud-prevention technology.

Preparing for fraud challenges in 2025.

When asked what the biggest fraud challenge for the year ahead was, an equal number of respondents (59%) cited reputational damage from fraud attacks and the financial cost of tackling and managing fraud. This was followed very closely by just over half (53%) who said they were concerned about how a lack of consumer awareness could lead to increased fraud risks.  

Regarding the types of fraud that customers were most concerned about, 24% of businesses were most worried about social engineering, such as phishing, while around the same number cited ID document forgery and manipulation. To a lesser extent, customers cited money mules and identity theft (18% each) as primary fraud challenges for 2025. Interestingly, just 12% cited deepfake attacks, despite the method becoming increasingly commonplace, while just 6% of respondents named insider threats as the top fraud challenge for 2025. 

When asked how they planned to fight fraud in 2025, the majority of respondents ranked effective training and upskilling of staff as the most important action to take, followed by access to AI technologies. Internally appointing new people responsible for fraud fighting and risk mitigation was considered the least important action. 

Interestingly, while some businesses have already deployed multi-layered anti-fraud solutions this year, 70% of businesses expect such solutions to be very important going forward, and a further 12% somewhat important. Only 6% said they were not important to them at all. 

At IDnow, we recognize the importance of keeping up to date with the latest developments and techniques in fraud and run regular training sessions and courses for our clients. 

To learn more about how our industry-leading fraud prevention technology can help you fight fraud to safeguard against fake IDs, synthetic identities, deepfakes, social engineering, money mules and more, check out our blog on the role of identity verification in the fight against fraud.

Or for more insights from industry insiders and thought leaders from the world of fraud and fraud prevention, check out one of our interviews from our Spotlight Interview series below.

- Jinisha Bhatt, financial crime investigator
- Paul Stratton, ex-police officer and financial crime trainer
- Lloyd Emmerson, Director of Strategic Solutions at Cifas

Or, discover all about the rise of social media fraud, and how one man almost lost a million euros to a pig butchering scam, in our blog, ‘The rise of social media fraud: How one man almost lost it all.’

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


KuppingerCole

Passwordless Authentication for Enterprises and Consumers: HID​

by Alejandro Leal

The password is a remnant of an era before hacking and credential-based attacks became a widespread problem. Although the internet has changed significantly since the early days, passwords have only become longer and more complicated. In parallel, cybercriminals have targeted operating systems with increasing sophistication and frequency as computers have become more accessible worldwide. For years, IT professionals have discussed the idea of eliminating passwords because they can easily be compromised. In addition, passwords can be costly, time-consuming, and difficult to manage, often resulting in poor user experience. Furthermore, the fact that password reuse is a common practice among customers and employees only exacerbates the problem.

In the context of Customer Identity and Access Management (CIAM), passwordless authentication solutions should have features and capabilities to detect, prevent, and minimize fraudulent activities and unauthorized access within an organization. Effective fraud prevention measures are crucial for protecting both the financial and reputational assets of a business. Passwordless authentication solutions should also support a variety of consumer devices, including smartphones, tablets, laptops, and desktop computers, ensuring seamless access across different platforms and operating systems.

PingTalk

Verifiable Credentials in Decentralized Identity

Understanding API and automated credentials and how they relate to decentralized identity

It’s an exciting time in the world of digital identity. We’re witnessing the convergence of user identification, authentication, and authorization in the palm of our hand – through our biometrically secure mobile devices and digital wallets. As an identity provider taking part in this paradigm shift (commonly referred to as decentralized identity), understanding the types, configuration, and ecosystem of verifiable credentials is crucial. Let’s start with some definitions.

Thursday, 14. November 2024

KuppingerCole

Understanding the Impact of AI on Securing Privileged Identities

Understanding the impact of AI on securing privileged identities has become a critical concern in today's rapidly evolving cybersecurity landscape. As artificial intelligence continues to advance, it presents both opportunities and challenges for organizations striving to protect their most sensitive access points. The rise of AI-powered threats has significantly altered the traditional identity attack chain, requiring a fundamental shift in how we approach privileged identity security.

To combat these emerging threats, organizations must leverage cutting-edge technologies and adopt innovative strategies. By implementing AI-driven security solutions, companies can enhance their ability to detect and respond to sophisticated attacks targeting privileged identities. These advanced systems can analyze vast amounts of data in real-time, identifying anomalous behavior and potential security breaches before they escalate. Additionally, machine learning algorithms can continuously adapt and improve security measures, staying one step ahead of evolving AI-powered threats.

Martin Kuppinger, Principal Analyst at KuppingerCole, will provide expert insights into the changing landscape of privileged identity security in the age of AI. He will discuss the latest trends in AI-driven threats, their impact on the identity attack chain, and offer strategic recommendations for organizations to strengthen their security posture. Martin will also explore the potential of AI as a defensive tool and how it can be leveraged to enhance privileged access management.

Morey J. Haber, Chief Security Advisor at BeyondTrust will share practical experiences and best practices for safeguarding privileged identities against AI-powered threats. He will present three key tips that organizations can implement to protect themselves from emerging AI-driven attacks. Morey will also discuss real-world case studies demonstrating successful strategies for integrating AI into existing security frameworks to bolster privileged identity protection.




TBD on Dev.to

What is Web5?

Web 5 is a decentralized platform that provides a new identity layer for the web to enable decentralized apps and protocols.

In the current web model, users do not own their data or identity. They are given accounts by companies and their data is held captive in app silos. To create a new class of decentralized apps and protocols that put individuals at the center, we must empower them with self-owned identity and restore control over their data.

Components of Web 5

There are three main pillars of the decentralized web platform, all of which are based on open standards.

Decentralized Identifiers

The identifiers we know and use today are owned by the government, a company, an organization, or some other intermediary. For example, our email addresses and social media handles are identifiers associated with us but are owned and controlled by the service providers. These companies have the right to ban, disable, or delete these identifiers and we have little to no control over this.

So before we can realize truly decentralized applications, we need decentralized identifiers that users own and control. This removes the dependency on centralized entities to authenticate and represent us.

Decentralized Identifiers (DIDs) are a W3C standard. They have a standardized structure that essentially links to you and your information.

They are a long string of text that consists of three parts:

1. the URI scheme identifier, which is did
2. the identifier for a DID method
3. the DID method-specific identifier

DIDs are the only component of Web5 that touches a blockchain, which is generally limited to anchoring the keys/endpoints linked to the ID.

That being said, anchoring DIDs on Bitcoin (or any blockchain) is not a requirement. In fact, what's great about having a standardized format for DIDs is that they can be anchored anywhere, or not anchored at all, and still work, although with varying levels of decentralization.

Here are examples of DIDs on the Bitcoin blockchain, the Ethereum blockchain, and the web. Notice they all use the same format: scheme, DID method, and DID method-specific identifier.

did:btcr:xyv2-xzpq-q9wa-p7t
did:ens:some.eth
did:web:example.com

Because personal data is not stored on the blockchain, the DID essentially acts as a URI that associates the subject of the DID (the person, company, or object being identified) with a DID document that lives off-chain.
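To make the DID-to-document mapping concrete: for the did:web method, the document location is derived directly from the identifier, so did:web:example.com resolves to https://example.com/.well-known/did.json. Below is a minimal TypeScript sketch of that mapping; it is illustrative only, and it skips details of the did:web specification such as port encoding and error handling.

// Minimal sketch of did:web resolution. Illustrative only: a production
// resolver would handle ports, percent-encoding, and errors per the spec.
async function resolveDidWeb(did: string): Promise<unknown> {
  const [scheme, method, ...rest] = did.split(":");
  if (scheme !== "did" || method !== "web") {
    throw new Error("not a did:web identifier");
  }
  const domain = rest[0];
  // Extra colon-separated segments become URL path components.
  const path =
    rest.length > 1 ? rest.slice(1).join("/") + "/did.json" : ".well-known/did.json";
  const response = await fetch(`https://${domain}/${path}`);
  return response.json(); // the DID document
}

For example, resolveDidWeb("did:web:example.com") fetches https://example.com/.well-known/did.json and returns the DID document, like the one shown below.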

DID Documents are JSON files stored in decentralized storage systems such as IPFS, and describe how to interact with the DID subject. The DID Document contains things like the DID subject's public keys, authentication and verification methods, and service endpoints that reference the locations of the subject’s data.

{ "@context": "https://www.w3.org/ns/did/v1", "id": "did:ion:EiClkZMDxPKqC9c-umQfTkR8", "verificationMethod": [ { "id": "did:ion:EiClkZMDxPKqC9c-umQfTkR8", "type": "Secp256k1VerificationKey2018", "controller": "did:ion:EiClkZMDxPKqC9c-umQfTkR8" } ], "authentication": ["did:ion:EiClkZMDxPKqC9c-umQfTkR8"] } Verifiable Credentials

Verifiable Credentials are a fully ratified W3C standard that works hand in hand with Decentralized Identifiers to enable trustless interactions - meaning two parties do not need to trust one another to engage, but claims made about a DID subject can still be verified.

For example, Alice needs to prove she has a bank account at Acme Bank. Acme Bank issues a cryptographically signed Verifiable Credential which would be stored in Alice's identity wallet.

The credential contains the issuer as Acme and the subject as Alice, as well as the claims, which are Alice's account number and full name.

Upon request for proof of banking, Alice presents the Verifiable Credential, which is cryptographically signed by both Alice and her bank.

This is an easy, machine-readable way to share credentials across the web. The Verifier does not know or trust Alice, but they do consider Acme trustworthy, and Acme has essentially vouched for Alice, thereby distributing trust. A minimal sketch of such a credential appears below.
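For illustration, here is what Alice's credential might look like under the W3C VC Data Model v1.1. The DIDs, the BankAccountCredential type, the signature suite, and all values are placeholders made up for this example, not real issuer data.

// Illustrative only: the shape follows the W3C VC Data Model v1.1,
// but every identifier and value here is a hypothetical placeholder.
const bankAccountCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "BankAccountCredential"],
  issuer: "did:example:acme-bank",        // Acme Bank's DID (hypothetical)
  issuanceDate: "2024-11-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:alice",              // Alice's DID (hypothetical)
    fullName: "Alice Example",
    accountNumber: "1234567890"
  },
  proof: {
    type: "Ed25519Signature2020",         // signature suite varies by issuer
    created: "2024-11-01T00:00:00Z",
    verificationMethod: "did:example:acme-bank#key-1",
    proofPurpose: "assertionMethod",
    proofValue: "z3FXQ..."                // truncated placeholder
  }
};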

Decentralized Web Nodes

Today, centralized entities act as our data stores. Applications hold all of our content and preferences on their servers.

Decentralized Web Nodes (DWNs) change this by allowing us to decouple our data from the applications that we use, and instead host our data ourselves in our own personal data stores.

BlueSky is a good example; it's a decentralized social media app. With BlueSky, your tweets and your connections aren't stored with the application. They are stored with you. So you can present your content on any decentralized social media app you want, not just BlueSky.

Your DWNs can hold both public and encrypted data. For example, in the case of a decentralized social media app, you'd want data like your posts and your connections to be public but things like your DMs to be private.

Your decentralized web nodes do not live on the blockchain. You can host your web nodes anywhere (your phone, computer, etc.) and replicate them across your devices and clouds; all data will be synced.

While self-hosting your DWNs provides a means for decentralizing your data, we recognize some users will be more comfortable with others hosting their web nodes for convenience's sake. We envision there will be vendors offering to host your web nodes for you. The good part is that you can encrypt any private data, so, unlike today, when cloud hosts can scan everything you store with them, you can still maintain some privacy even if your web nodes are hosted by intermediaries.

Your DWNs are associated with your Decentralized Identifiers and are listed in a DID document.

Notice the serviceEndpoint section of the DID doc specifies service endpoints and provides URIs to the decentralized web nodes.

{ "@context": "https://www.w3.org/ns/did/v1", "id": "did:web:example.com:u:alice", "service": [ { "id": "#dwn", "type": "DecentralizedWebNode", "serviceEndpoint": { "nodes": ["https://dwn.example.com", "00:11:22:33:FF:EE"] } } ], "verificationMethod": [ { "id": "did:web:example.com:u:alice", "type": "Secp256k1VerificationKey2018", "controller": "did:web:example.com:u:alice" } ], "authentication": ["did:web:example.com:u:alice"] }

Once an application has the address of your DWN, it can send you a request for data.

This represents a request from an application to obtain all objects within a DWN that follow the SocialMediaPosting schema:

POST https://dwn.example.com/
BODY {
  "requestId": "c5784162-84af-4aab-aff5-f1f8438dfc3d",
  "target": "did:example:123",
  "messages": [
    {
      "descriptor": {
        "method": "CollectionsQuery",
        "schema": "https://schema.org/SocialMediaPosting"
      }
    },
    {...}
  ]
}

The data within DWNs are JSON objects that follow a universal standard, thus making it possible for any application to discover and process the data given its semantic type.

If this data is public, those objects will be automatically returned to the application; if the data is private, the node owner would need to grant the application access to that data. The sketch below shows what issuing such a query might look like from an application.
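As a rough TypeScript illustration, this snippet POSTs the CollectionsQuery above to a DWN endpoint with fetch. The endpoint URL and the assumption that the node speaks plain JSON over HTTP are illustrative; real applications would typically go through a Web5 SDK rather than raw HTTP.

// Hedged sketch: send the CollectionsQuery shown above to a DWN over HTTP.
// The response shape is not specified here; treat it as unknown.
async function querySocialPosts(dwnUrl: string, targetDid: string): Promise<unknown> {
  const response = await fetch(dwnUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      requestId: crypto.randomUUID(),
      target: targetDid,
      messages: [
        {
          descriptor: {
            method: "CollectionsQuery",
            schema: "https://schema.org/SocialMediaPosting"
          }
        }
      ]
    })
  });
  if (!response.ok) throw new Error(`DWN request failed: ${response.status}`);
  return response.json(); // public objects; private ones require an access grant
}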

Identity Wallets

Obviously, all of this is pretty complicated, especially for non-technical users. So we need a simple, easy-to-use interface that allows people to access and manage their identity.

A well designed identity wallet would provide ways to manage the data stored in decentralized web nodes, the decentralized IDs and the context in which they should be used, verifiable credentials, and authorizations.

Decentralized Web Apps

Web 5 enables developers to build decentralized web applications (DWAs) on top of it, and it’s all open source! You're free to use it as your foundation and focus your attention on what you really care about: your app. Web5 brings to DWAs what cloud and application servers bring to enterprise apps. It does the hard part. It brings decentralization. By building your apps on top of Web 5, you get decentralization and identity and data management as part of the platform.

This is definitely a fundamental change in how we exchange data, but it's not a total overhaul of the web we already know. It works like Progressive Web Apps: you add the decentralized web node SDK, and applications are free to go serverless because the data isn't stored with them.

The sky's the limit to what you can build on top of this platform, but here are some cool basic examples.

Music Applications

No one likes recreating their music playlists over and over again for different apps. With Web 5, you wouldn't have to do that.

In this example, Groove has access to write to Alice's decentralized web node and adds a new entry.

Tidal has access to read from Alice's DWN, so can read the new entry that was added by Groove, and now Alice has her playlist readily available on both apps.

Because the same data is continuously used across apps, Groove and Tidal not only get access to Alice's data but can use it to improve her user experience, creating a stronger experience than Alice could have gotten had she not used this tech.

Travel Applications

Your travel preferences, tickets, and reservations are scattered across so many different hotels, airlines, rental car agencies and travel apps, making it really difficult to coordinate. Heaven forbid there's any hiccup in the system such as a delayed flight. You end up trying to get in touch with the car rental place to let them know you'll be late for your reservation, and if it's really late, you'd want to call the hotel to ask them not to give away your room. All while you're hustling and bustling at the airport.

Web 5 can help unify these various app experiences.

If Alice gives the hotel, the airline, and the rental car agency access to the Reservation and Trip objects in her DWN, they can react and adjust accordingly to any changes made.

These are just a few applications that can be realized by building on top of Web 5. There are so many more possibilities once the web is truly decentralized, the way it was always intended to be.


California DMV Hackathon Win: Privacy-Preserving Age Verification

At the recent California DMV Hackathon, the Block team, represented by members from Square and TBD, won the Best Privacy & Security Design award for building a prototype of an instant age verification system. This solution utilizes mobile drivers’ licenses (mDLs) to provide secure, privacy-centric transactions for age-restricted purchases with Square’s Point of Sale (POS) system.

In this post, we’ll explore the core technical components behind our solution, which centered on using TruAge technology to enable seamless, secure age verification.

How TruAge QR Code Verification Works

At the heart of our prototype is the ability to scan and verify a TruAge Age Token QR code. These QR codes contain a verifiable credential (VC) that confirms a person’s legal age without exposing unnecessary personal information. Here’s a breakdown of how we approached verifying these credentials in our solution.

Decoding the QR Code Payload

The first step in the verification process was reading the QR code provided by the customer. TruAge QR codes follow a standard format that encodes the verifiable presentation (VP) as compact CBOR.

Our team implemented a scanner using our open source web5-swift SDK that reads the QR code and decodes the CBOR-encoded payload. This CBOR format is efficient, allowing the verifiable presentation to be transmitted and processed quickly, minimizing any delays at the point of sale.
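Our production code lives in the web5-swift SDK, but as a rough TypeScript illustration of this step, decoding the CBOR payload might look like the following. The cbor-x package is one possible decoder; the payload variable and the structure of the decoded value are assumptions for the sketch.

import { decode } from "cbor-x"; // one possible CBOR decoder (npm install cbor-x)

// qrPayload: raw bytes read from the TruAge QR code by the scanner.
function decodePresentation(qrPayload: Uint8Array): unknown {
  // decode() turns the compact CBOR bytes back into a structured object,
  // which we then treat as the verifiable presentation.
  const presentation = decode(qrPayload);
  return presentation;
}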

Converting CBOR to JSON

Once we decoded the CBOR data, the next step was to parse it into a JSON-based verifiable presentation using the W3C Verifiable Credentials (VC) Data Model v1.1. This model is critical to ensuring interoperability across different platforms and services, as it standardizes how credentials are represented and exchanged in a decentralized manner.

Validating the Issuer’s DID

After converting the data into a verifiable format, we needed to validate the digital signature on the credential. We retrieved the issuer’s Decentralized Identifier (DID) from the TruAge server, which provided us access to a sandbox environment containing their list of authorized DIDs.

Using DIDs, we were able to validate the cryptographic signature to ensure that the credential was issued by a trusted TruAge provider. This validation step is critical for ensuring that the credential has not been tampered with and is issued by a legitimate authority.
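In pseudocode terms, the trust check looks roughly like the sketch below. resolveIssuerDid and verifySignature are hypothetical helpers standing in for a real DID resolver and the signature suite named in the credential's proof; only the two-step logic (allow-list check, then signature check) reflects what we actually did.

// Hypothetical helper signatures; real code would use a DID resolver
// and the cryptographic suite referenced by the credential's proof.
declare function resolveIssuerDid(did: string): Promise<{ publicKeyJwk: unknown }>;
declare function verifySignature(credential: object, key: unknown): Promise<boolean>;

async function isTrustedTruAgeCredential(
  credential: { issuer: string },
  authorizedDids: string[] // fetched from the TruAge sandbox environment
): Promise<boolean> {
  // Step 1: the issuer must appear on the list of authorized TruAge DIDs.
  if (!authorizedDids.includes(credential.issuer)) return false;
  // Step 2: the signature must verify against the issuer's published key,
  // proving the credential was not tampered with.
  const { publicKeyJwk } = await resolveIssuerDid(credential.issuer);
  return verifySignature(credential, publicKeyJwk);
}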

Credential Content Verification

Once the issuer’s signature was validated, the next step was to check the contents of the verifiable credential itself. In this case, we looked for proof that the individual was over 21 and verified that the credential had not expired.

This lightweight verification process ensures that businesses can quickly and easily confirm a customer’s legal age, while protecting their privacy by not exposing sensitive information like birthdates or addresses.
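The final checks reduce to a couple of predicates. In the sketch below, the ageOver21 claim name and the top-level expirationDate field are assumptions for illustration; the actual TruAge credential schema may name these differently.

// Hedged sketch of the content checks: age claim present and credential unexpired.
interface AgeCredential {
  credentialSubject: { ageOver21?: boolean }; // assumed claim name
  expirationDate?: string;                    // ISO 8601, per the VC Data Model
}

function isValidAgeProof(credential: AgeCredential, now: Date = new Date()): boolean {
  const notExpired =
    !credential.expirationDate || new Date(credential.expirationDate) > now;
  return credential.credentialSubject.ageOver21 === true && notExpired;
}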

Building the Integration: Web5 and TruAge Libraries

To bring this solution to life, we used a few key technologies:

iOS: Our team developed the iOS implementation using the web5-swift library, which allowed us to efficiently handle the scanning, decoding, and parsing of the TruAge QR codes on Apple devices.

Android: For Android, we modified the TruAge library provided by Digital Bazaar to make it compatible with our solution. This involved adapting the library for seamless integration with our QR code parsing and validation logic.

Privacy and Security at the Forefront

Our approach ensures that personal information is protected at every stage of the transaction. By focusing solely on verifying the specific data point needed (in this case, whether someone is over 21), we avoid collecting or storing any unnecessary information. This is a win for both businesses and consumers, as it minimizes risk while maintaining a smooth user experience.

By integrating this technology into Square’s Retail POS system, we not only enhanced security but also brought innovative, privacy-preserving solutions to small businesses that need to comply with age verification laws. This prototype has the potential to extend to many other use cases, from secure employee onboarding to identity verification for suppliers and customers.


KuppingerCole

Privileged Access Management (PAM)

by Paul Fisher

PAM is crucial for securing privileged access to critical resources, reducing the risk of breaches and insider threats. The market has seen rapid growth with the rise of cloud adoption, digital transformation, and the proliferation of identities across various platforms. Both established vendors and newer entrants are vying for market share, with some focusing on comprehensive identity security platforms and others offering specialized point solutions for privileged access.

Ocean Protocol

DF115 Completes and DF116 Launches

Predictoor DF115 rewards are available. DF116 runs Nov 14 to Nov 21, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 115 (DF115) has completed.

DF116 is live today, Nov 14. It concludes on November 21st. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF115 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF116

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to distribute these rewards evenly. Then, ROSE is distributed at the end of the week to active Predictoors who have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF115 Completes and DF116 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Thales Group

2024 Capital Markets Day: Thales, a global technology leader in Defence, Aerospace and Cyber & Digital


Thales has successfully reinforced its business portfolio and delivered on its operational commitments over the last 5 years

Defence reinforced as a core market
Aerospace strengthened both organically and through M&A
Cyber & Digital scaled as a new core technology segment
Ground Transportation divested
Robust commercial performance
Solid increase in profitability
Outstanding cash flow performance

Building on its unique technological platform, the Group will implement the following strategic priorities

Leverage premium portfolio to deliver profitable growth
Reinforce premium positioning
Differentiate through disruptive technology
Enhance employer attractiveness
Strengthen its ESG leadership

Thales is setting new financial targets for the 2024-28 period

Organic sales growth rate (CAGR over 2024-2028, base year 2023) of +5-7% (1)
EBIT margin improvement to 13-14% in 2028 (2)
Average FOCF (3) conversion of 95-105%
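
For orientation only (our arithmetic, not a company figure), a constant organic growth rate r compounded from base year 2023 over the five years to 2028 implies:

```latex
% Illustrative compounding at the 6% midpoint of the +5-7% range
\[
\text{Sales}_{2028} = \text{Sales}_{2023}\times(1+r)^{5},
\qquad (1.06)^{5} \approx 1.34
\]
```

That is, the midpoint of the target range corresponds to roughly a third more revenue in 2028 than in the 2023 base year.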

Thales (Euronext Paris: HO) is hosting today its 2024 Capital Markets Day with investors and financial analysts, in-person and through a live webcast accessible via the following link.

Following the acquisitions of Gemalto, Imperva and Cobham AeroComms, Thales has successfully transformed its business portfolio into a unique global technology enabled Defence, Aerospace, and Cyber & Digital company, with strong and differentiated leadership positions across businesses.

Patrice Caine, Chairman and Chief Executive Officer, Pascal Bouchiat, Senior Executive Vice-President, Chief Financial Officer, and members of the Executive Committee of Thales will provide details of the Group’s key new strategic priorities, medium-term financial objectives and the characteristics of Thales’ unique positioning that enable it to deliver accelerated long-term profitable growth.

“Since our last Capital Markets Day in 2019, Thales has successfully navigated an unprecedented and challenging geopolitical environment. I would like to thank all our teams for their continuous commitment and our customers for their trust. Through these times, we have improved the quality of our businesses with active portfolio management, strengthening our core Defence and Aerospace portfolio while transforming and scaling up our Cyber & Digital business.
The strong platform we have built with unique leadership positions across our three markets, our ability to innovate and anticipate technological disruptions and our strong pipeline of new premium products & services enable us to look forward to the next chapter of accelerated and sustainable growth with confidence.
I am delighted to be today with investors and financial analysts and set out our roadmap to deliver attractive, profitable growth over the next five years, with our strengthened portfolio. I look forward to interacting with our key stakeholders.”
Patrice Caine, Chairman & Chief Executive Officer

Strategic priorities

Looking ahead, Thales will implement the following strategic priorities:

Leverage premium portfolio to deliver profitable growth, building on our leadership position in fast growing segments. 80-90% of Thales’ revenues are exposed to fast growing markets. Thales’ long-term visibility on these market segments along with a diversified customer base provides confidence in Thales’ ability to execute on its growth ambition.

Reinforce premium positioning, combining notably delivery excellence, superior operational performance, customer driven innovation, user-friendly design and experience and best-in-class after-sales support. Thales will continue to differentiate itself through innovation in these fields that deliver higher value to customers, driving growth in our market share and better pricing.

Differentiate through disruptive technology, embracing technology disruptions and staying ahead of the competition, with ~€4bn allocated to research and development (€5bn by 2028). This critical mass enables the Group to work on a wide spectrum of technologies and projects, such as trustworthy AI, 6G, and quantum technology. This will enable Thales to strengthen its undisputed technological leadership in the future.

Reinforce employer attractiveness, by investing in attracting and retaining the best talents everywhere and becoming a Learning Company. The Group’s purpose, thorough leadership, and partnerships with top universities, academies, and renowned scientists set it apart. Its contribution to major societal challenges makes it the destination of choice for the brightest minds.

Strengthen its ESG leadership, delivering on its ambition to become an ESG leader and protecting our societies, people, and the planet. Thales’ unique portfolio of solutions can help solve some of the major societal issues and build a future we can all trust. Thales joined the CAC 40 ESG Index in September 2024 and will unveil its new 2030 flagship ESG objectives in 2025.

A clear roadmap for each business

Relying on 1) the execution of each of the above strategic priorities, 2) the unprecedented visibility Thales is currently benefiting from across its portfolio and 3) solid growth prospects, the Group has set up a clear and ambitious 2028 roadmap for each of its businesses:

In Defence: Thales intends to leverage its extended delivery capabilities to capture high market growth. Sales organic growth CAGR (4) over 2024-2028 should reach +6 to 7% and EBIT margin 13%.

In Avionics: the Group will grasp the advantage of an enhanced state-of-the-art product portfolio to address strong market demand. Sales organic growth CAGR (4) over 2024-2028 should reach +5 to 7% and EBIT margin 13 to 14% in 2028.

In Space: Thales is focused on restoring the business profitability to exceed the Group’s WACC, while considering selective business opportunities. Sales organic growth CAGR (4) over 2024-2028 should reach +2% and EBIT margin 7%+ in 2028.

In Cyber & Digital: the Group intends to leverage its unique product offering with best-in-class solutions to reinforce Thales’ leadership in this fast-growing market. Sales organic growth CAGR (4) over 2024-2028 should reach +6 to 7% and EBIT margin 16 to 17% in 2028.

2024-2028 Group financial targets

Over the last five years, Thales has built a stronger and clearer portfolio, while delivering on its operational commitments, leading to higher profitability and further cash generation. Strong commercial performance has led to a record backlog, which should represent close to four years of revenue at the end of 2024 in Defence, providing unequalled visibility into the future. The business portfolio provides a solid base to address the next decade of market growth, deliver on our full potential and maximize value creation thanks to higher added value, technology driven businesses.

Based on this current view and assuming no major changes in the macro-economic and geopolitical environment, and stability of tax regulation in its key geographies, Thales announces today its medium-term financial targets as follows:

Organic sales growth of +5-7% per year on average over the 2024-28 period, driven by broad-based growth across businesses. Thales’ strong market position, increased production capacity and premium positioning will enable the Group to meet the growing market demand and accelerate organic growth in Defence, leverage market demand in Avionics and benefit from the structural growth opportunities in Cyber & Digital.

EBIT margin improvement to 13-14% in 2028, driven by multiple levers, including volume growth, premiumization of products & services, and cost efficiency, while increasing R&D investments to drive innovation. This will be spread across business segments: maintaining best-in-class profitability within Defence, margin improvement in Aerospace, primarily driven by recovery in Space margins to 7%+ in 2028, and further margin development within Cyber & Digital, including Imperva.

Thales will pursue the integration of Imperva and Cobham AeroComms, to deliver the expected synergies, revenue and profitability.

The above will drive adjusted EPS growth of 50-60% over the 2024-2028 period.

Thales will maintain a high cash conversion rate, which should stand between 95 and 105% on average over the 2024-2028 period.

Thales will continue to operate with an active capital allocation strategy to maximize shareholder value, prioritizing organic growth and deleveraging, maintaining a dividend payout of ~40% (5) and strengthening the Group portfolio with selective acquisitions that meet high business and financial thresholds. The company will consider share buybacks to prevent excessive deleveraging and if the Group’s valuation suggests it.

*****

This press release contains certain forward-looking statements. Although Thales believes that its expectations are based on reasonable assumptions, actual results may differ significantly from the forward-looking statements due to various risks and uncertainties, as described in the Company's Universal Registration Document, which has been filed with the French financial markets authority (Autorité des marchés financiers – AMF).

1 “Organic” stands for “at constant scope and exchange rates”.

2 Non-GAAP financial indicators, see definitions in the appendices, page 6.

3 Defined as the conversion ratio of adjusted net income into free operating cash flow.

4 CAGR over 2024-2028, base year 2023

5 Subject to Board of Directors’ approval.


Wednesday, 13. November 2024

KuppingerCole

Cloud Backup for AI Enabled Cyber Resilience


Organizations and society have become dependent upon digital services, which has increased the business impact of cyber threats and hence the need for cyber resilience. Organizations need to take steps beyond preventing cyber-threats from impacting their digital infrastructure – they must also be able to respond to and recover from incidents when they occur. Data backup solutions are an essential element of every organization’s cyber resilience plan.

In the webinar, Mike Small, Senior Analyst at KuppingerCole Analysts, will look at the status and future of Data Backup, what organizations should consider when defining their own approach to cyber resilience, and what the vendor landscape looks like. He will discuss the different requirements for Data Backup and how solutions in the market meet these.




auth0

Demystifying Multi-Tenancy in a B2B SaaS Application

Why a Multi-Tenant approach is fundamental to B2B SaaS, and how using Auth0 and the Auth0 Organizations feature can help implement it.
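
For a concrete sense of the pattern, here is a minimal Rust sketch of the tenant-isolation check at the heart of multi-tenancy, assuming the common setup where Auth0 Organizations adds an `org_id` claim to access tokens (the claim shape, types, and helper names below are illustrative, not Auth0 SDK code):

```rust
use serde::Deserialize;

// Assumed deps: serde = { version = "1", features = ["derive"] }, serde_json = "1".
// Claims of interest from an access token that has already been verified.
#[derive(Deserialize)]
struct Claims {
    sub: String,            // the user
    org_id: Option<String>, // the tenant (Auth0 Organizations claim)
}

// Tenant isolation: a request may only touch data belonging to the
// organization named in the caller's token.
fn authorize_tenant(claims: &Claims, requested_org: &str) -> Result<(), String> {
    match claims.org_id.as_deref() {
        Some(org) if org == requested_org => Ok(()),
        Some(other) => Err(format!("token is scoped to {other}, not {requested_org}")),
        None => Err("token names no organization; reject for B2B routes".into()),
    }
}

fn main() {
    // Illustrative claims as they might look after JWT verification.
    let claims: Claims =
        serde_json::from_str(r#"{"sub":"auth0|123","org_id":"org_abc"}"#).unwrap();
    assert!(authorize_tenant(&claims, "org_abc").is_ok());
    assert!(authorize_tenant(&claims, "org_xyz").is_err());
    println!("tenant checks behaved as expected for {}", claims.sub);
}
```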

SC Media - Identity and Access

Regulatory reforms slow to hinder Italy’s spyware boom

One such spyware firm, RCS Labs, gained attention after its spyware, Hermit, was discovered on devices in Kazakhstan following protests.



US doubles down support for UN cybercrime treaty

While several countries have expressed concerns about the potential exploitation of the treaty to curtail human rights and strengthen extraterritorial surveillance, implementing the treaty with appropriate safeguards could prove beneficial in combating increasingly sophisticated cybersecurity threats.



Instagram purportedly subjected to widespread data scraping

More than 100 records shared by the hacker revealed the scraping of usernames, names, email addresses, biographies, follower and following counts, external URLs, and locations, as well as targeted usernames, user IDs and scrape IDs, account creation dates, and account categories.



Indicio

Verifiable credentials mature with product launches, implementations

Biometric Update The post Verifiable credentials mature with product launches, implementations appeared first on Indicio.

SC Media - Identity and Access

Permiso releases 3 open-source cloud threat detection tools

The first tool is called DetentionDodger and scans CloudTrail logs to identify vulnerabilities related to leaked credentials to help organizations detect policy attachment failures and analyze user privileges that could be exploited by threat actors.



KILT

KILT Community Update: End of Delegator Staking Rewards


The community has just voted to end KILT Delegator rewards now rather than later — marking a strategic move from incentives toward sustainable growth. The delegator rewards were originally planned to last two years after Golive, but the community extended them for another year. Now, the community has decided to end this additional phase ahead of schedule, signaling a shift in priorities: lower inflation over higher rewards.

What’s Changed?

Delegator Rewards Ended: Effective immediately, Delegator rewards are set to 0%, lowering inflation significantly. However, Delegators can continue staking and play a role in KILT’s governance.

Collator Rewards Remain: Collators keep the KILT network operational, and their rewards will continue to support network reliability.

Why the Change?

Delegator rewards were initially designed to incentivize participation and staking on the most reliable collators, but they were always meant to phase out eventually. The rewards program was a critical step in KILT’s growth, and the role delegators have played has been invaluable in maintaining the reliability of the KILT network. Now, with a strong foundation in place, the community is ready to move forward by reducing inflation and focusing on KILT’s expanding utility.

With bonding curves, KILT is entering a new era of utility, making the delegator reward incentives no longer necessary.

What’s Next?

The KILT community’s decision showcases the power of decentralized governance in action, bringing about meaningful change that serves both current and future participants. Your input continues to be crucial as KILT rolls out new features and community-driven initiatives.

Gratitude to the KILT Community

A heartfelt thank you to all KILT Delegators, Collators, and community members for your unwavering support, valuable input, and dedication. Together, we’re crafting a stronger and more sustainable future for KILT. We invite you to join us in this exciting new phase as we continue to innovate, collaborate, and grow — powered by the community, for the community.

For more information on the proposal and to join the conversation, please visit: KILT Governance Proposal: https://kilt.polkassembly.network/referendum/45?tab=onChainInfo

About KILT Protocol

KILT is an identity blockchain for generating decentralized identifiers (DIDs) and verifiable credentials, enabling secure, practical identity solutions for enterprises and consumers. KILT brings the traditional process of trust in real-world credentials (passport, driver’s license) to the digital world while keeping data private and in possession of its owner.

KILT Community Update: End of Delegator Staking Rewards was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Lockstep

Australians becoming familiar with verifiable credentials


The Reserve Bank of Australia (RBA) recently released the 2024 Payments System Board Annual Report. It shows that Australian consumers are rapidly becoming familiar with verifiable credentials in the form of smart phone digital wallets.

“The RBA continued to monitor the growth in mobile wallet card transactions… Payments using mobile wallets reached 39 per cent of card transactions in the June quarter 2024”.  Graph 2.1 from the report is reproduced above.

Whether it’s tap-to-pay at a merchant terminal or click-to-pay in a mobile app, most consumers are becoming comfortable with this digital experience. Thus, they are ready to click-to-present any important IDs in exactly the same way.

Consumers should be able to prove any important facts about themselves with the same security, speed and ease of use as they present their payment cards. So, as Lockstep argued in our submission on Australia’s 2030 cybersecurity strategy, the single most impactful thing that governments could do to make citizens safe online is simply give them the option of carrying their driver licences, Medicare cards, health IDs and social security numbers in standard digital wallets.

The post Australians becoming familiar with verifiable credentials appeared first on Lockstep.


IDnow

5 takeaways from the ‘Sign of the times: The digital signature revolution’ webinar.

Industry experts gathered to discuss the past, present and future of digital signatures and their ever-increasing role in financial services.  

From Sumerian tablets to digital documents, signatures have been used for thousands of years as symbols of trust, proof of identity and indicators of agreement.  

Nowadays, digital signatures do all the above and offer enhanced security and efficiency. Little wonder then that the European digital signature market is predicted to be seven times larger by 2030.

To explore how digital signature solutions are changing the way we do business, we organized the ‘Sign of the times: The digital signature revolution’ webinar.

Moderated by Ellie Burns, Head of Product & Customer Marketing at IDnow, participants included Uwe Pfizenmaier, Director Product Management VideoIdent/eSign at IDnow, Magali Biron, VP Business Development eSign at Nitro and Julian Groetzbach, NPL & Business Development Manager at TF Bank.  

Now available on-demand, the hour-long webinar covers a variety of topics, from the legal and regulatory landscape of digital signatures to how they can be used to boost conversions. 

Missed the webinar? Here are our five key takeaways!

1. Fraud is propelling the adoption of digital signatures.  

There are three different types of digital signatures: Simple Electronic Signatures (SES), Advanced Electronic Signatures (AES) and Qualified Electronic Signatures (QES). As the least regulated, SES is mostly used for internal purposes, while AES and QES are regulated by the Europe-wide eIDAS standard.

Almost three quarters of European organizations still use a mixture of paper and electronic documents, but increasing numbers of companies are making the digital switch. Up until last year, TF Bank was still using wet signatures to conclude contracts.

Recent increases in the adoption of digital signatures have been largely driven by the need to onboard more customers, increase conversions and reduce fraud. However, Uwe warned that: 

The benefits of digital signatures, including enhanced trust, speed and security, alongside cost savings and a more sustainable method of concluding contracts, are only made possible through a robust identity verification process.

Uwe Pfizenmaier, Director Product Management VideoIdent/eSign at IDnow

Despite the obvious benefits, some customers are still hesitant to use digital signatures, which is why it’s important for businesses to reassure users that they’re safe and secure and offer improved convenience. IDnow’s InstantSign, for example, only requires users to onboard once. 

Acceptance of new products and technologies can also depend on the maturity of the market, with countries including Germany still hesitant about adopting certain technologies.

2. The role of regulation in driving adoption. 

Regulations can act as a major accelerator for digital signature adoption. Despite different national laws dictating specific requirements, which can make it difficult to scale and expand, regulations like Electronic Identification, Authentication and Trust Services (eIDAS) can serve to simplify the process.  

Indeed, although compliance requirements may vary across countries, along with accepted types of identification documents, being able to offer as consistent a user experience as possible is vital.

Contrary to popular belief, the goals of regulators, compliance bodies and businesses are more aligned than people may think. As such, it’s important to embrace solutions that address the needs and challenges of both sides. Integrating a digital signature QES solution can be a great example of this as it not only complies with regulations but also leads to better conversions.  

3. Importance of trust in the digital economy. 

In the ‘offline’ world, customs like shaking hands and looking one another in the eye go some way to establishing trust. 

In financial services, trust is essential, regardless of whether online or off. If a financial institution loses the trust of its customers, it will have a knock-on effect on its reputation and bottom line. Companies must also trust their solution providers so they, in turn, can trust their customers and vice versa. 

Trust is built by being open and offering a good user experience and accessibility, so people want to adopt the solution. That is the foundation of a successful trust process.

Uwe Pfizenmaier, Director Product Management VideoIdent/eSign at IDnow

As digital signatures deal with the exchange of personal data between businesses and users, companies need to treat the data with utmost care, while providing customers with the support they need. 

4. Impact of eIDAS 2.0 regulations.

eIDAS 2.0 will address weaknesses in the first iteration of eIDAS, offering better data protection and a more harmonized proof of identity. It will also likely usher in a new era of safer and more secure digital signatures. 

The more forward-thinking financial services players should see new regulations, technology (e.g. artificial intelligence) and solutions (digital wallets) as opportunities to future-proof processes and ultimately increase conversions.  

5. Future of digital signatures. 

To thrive in an ever-changing market, businesses must be quick to adapt to new developments and integrate new technologies to optimize products and processes. 

Implementing technologies like biometrics, machine learning, artificial intelligence and predictive analytics will improve trust and bolster security. 

In markets such as Germany,  eID will play a major role in the future of identification and digital signing, which will assist in the advancement of digital fingerprints. A huge shift is expected for digital identity as more countries prepare for the delivery of a digital ID. Therefore, it is important for companies to partner with solution providers like IDnow to onboard users and facilitate the conclusion of contracts in a user-friendly way.  

However, as always, as companies experiment with technology and launch new processes like biometric checks and AI, they need to be mindful of new and even more inventive fraud attacks. 

Learn more about digital identities in our blog “5 reasons why digital identities will revolutionize business in 2025 and beyond.” 

To learn more about IDnow Trust Services AB and how it will revolutionize the digital signature market by offering greater legal certainty and higher security for electronic transactions, read our interview with Chief Executive Officer of IDnow Trust Services AB, Johannes Leser and Registration Officer of IDnow Trust Services AB, Uwe Pfizenmaier. 


By

Kristen Walter
Jr. Content Marketing Manager
Connect with Kristen on LinkedIn


KuppingerCole

Analyst's View: Identity and Access Governance


by Nitish Deshpande

Identity and Access Governance concerns the access mechanisms and their relationships across IT systems. It is instrumental in monitoring and mitigating access-related risks. These risks most commonly include information theft and identity fraud through unauthorized changes and/or subversion of IT systems to facilitate illegal actions. Over recent years, security incidents have originated from poorly managed identities and proved the need to address these issues across all industry verticals.

Thales Group

Thales Marks National Engineering Day with Showcase of COREF 4.0 Technologies


In celebration of National Engineering Day, Thales has collaborated with ITN Business on their “Engineering: Today, Tomorrow & Beyond,” programme. This initiative features the cutting-edge work being carried out by Thales at our Connected Reconfigurable Factory (COREF), highlighting our efforts to revolutionize the UK manufacturing landscape through Industry 4.0 technologies.

COREF not only improves engineering processes for a more efficient and resilient future, but it also highlights the vital role of people - especially our apprentices and graduates, who represent the future of engineering.

Driving Industry 4.0 Forward

At the heart of COREF’s mission is the development of innovative solutions to accelerate the adoption of Industry 4.0 technologies in high-complexity, low-volume production environments. By integrating smart tools, AI, and robotics, COREF empowers industries such as defence, aerospace, and space to maintain competitive advantage through improved productivity, adaptability and resilience. We are not just adopting new technologies—we are embedding them into manufacturing processes in ways that work alongside human expertise, ensuring that technological advancement enhances, rather than replaces, our people.

Our approach ensures that all production lines benefit from flexible, scalable solutions that are tailored to their specific needs. This is particularly crucial for SMEs, who often face challenges in accessing advanced automation tools. COREF’s open-access facilities remove barriers to innovation by providing smaller businesses with the resources to integrate Industry 4.0 technologies, enabling them to thrive in a rapidly evolving landscape.

Secure Connectivity and Trusted Decision-Making

One of the most critical aspects of Industry 4.0 is the secure flow of data and communication between connected systems. COREF’s unique ability to facilitate trusted decision-making through secure, bi-directional data exchange ensures that organisations can leverage real-time insights into their operations without compromising security. Using Thales’ high-speed encryptors, we ensure that all data transferred between our operational technology (OT) and information technology (IT) layers is secure, creating a trusted environment where decision-makers can act with confidence.

Trusted decision-making is essential for optimising production, and Thales is proud to be at the forefront of ensuring that these advanced systems operate securely. Whether it’s predicting maintenance needs or optimising workflow, our secure, encrypted data flows provide real-time visibility and control over manufacturing processes, helping organisations avoid potential vulnerabilities.

Nurturing Talent for the Future of Engineering

COREF is not just about technology—it’s also about people. Our focus on developing early career talent through apprenticeships and graduate programmes is a key part of our mission. COREF offers young engineers the opportunity to work with novel technologies and contribute to transformative engineering projects, preparing them to meet the challenges of tomorrow.

We understand that the future of engineering lies in diversity—of thought, background, and expertise. COREF fosters an inclusive environment where engineers from various disciplines collaborate to find creative solutions. This diversity is essential to driving innovation, and we believe that by nurturing talent today, we can shape the future of engineering for the better.

Engineering the Future: A Digital Transformation

As we look ahead, the transformation of engineering practices through digital approaches such as digital twins, digital prototypes, and advanced modelling will be central to the future of our work at COREF. Our approach aligns closely with the broader narrative of the UK’s engineering transformation, and we will be sharing further insights at key industry events, including the DE&S Digital Engineering Conference next February.

COREF is a vital part of Thales's expanding innovation and technology ecosystem in the UK, having established Demonstrator Cells in all four nations. Our initiatives have also led to the launch of several PhD research projects, enriching our community and driving knowledge across our network of suppliers and partners.

Thales’ commitment to the future of engineering is evident in our ongoing efforts to build a secure, resilient, and innovative manufacturing ecosystem. By driving the digital transformation of engineering and ensuring secure, trusted decision-making, COREF is playing a crucial role in shaping the future of UK manufacturing.

Watch our COREF Film on the  "Engineering: Today, Tomorrow & Beyond" ITN Business Hub. 

 


Eastern Airlines Technic and Thales extend MRO cooperation

Eastern Airlines Technic (EASTEC) and Thales renewed their Maintenance Partnership Agreement, strengthening their collaboration to provide maintenance, repair, and overhaul (MRO) services. This contract extension follows a successful collaboration between the two companies since 2018. This represents another important milestone in the decade-long relationship between EASTEC and Thales. Thales will continue to support EASTEC and to enhance its maintenance capabilities. Maintenance and repair services for the next five years will be provided by Thales for its avionics equipment on China Eastern Airlines’ A320, A330 and B737 fleet.

On November 12th, 2024, EASTEC and Thales renewed their Partnership Agreement at a signing ceremony held at the 15th Airshow in Zhuhai China. Maintenance and repair services will be provided by Thales through 2029 for its avionics equipment installed on China Eastern Airlines’ A320, A330, and B737 fleet. This contract extension follows a successful collaboration between the two companies that started in 2018. This is another important milestone in the decade-long relationship between the two companies and a demonstration of the trust in Thales’s premium service quality and commitment to serve the growing aviation market in China.

Services will be delivered by both Thales Aerospace Beijing Co., Ltd. - Thales local maintenance center in China, and Thales Aviation Global Services (AGS) center in Singapore. Together, they will provide comprehensive support for China Eastern.

Since its first selection of Thales avionics equipment in 2014, China Eastern has equipped 270 A320 aircraft with Thales Flight Management Systems (FMS), Low Range Radio Altimeters (LRRA) and T3CAS integrated surveillance solution. In 2018, China Eastern and Thales also signed a Strategic Maintenance Cooperation Agreement, solidifying their comprehensive partnership.

Thales is at the forefront of innovation and ready to meet the evolving demands of the aviation industry.

Thomas Got, Vice President of Aviation Global Services, Thales Avionics said, "We are proud to strengthen our collaboration with EASTEC, reinforcing our position as a trusted partner in the region. At Thales, we leverage our global expertise to deliver high-quality premium solutions to EASTEC and our local customers and partners.”


Tuesday, 12. November 2024

KuppingerCole

Building Application Resilience Amidst Regulatory Shifts


In today’s fast-changing regulatory landscape, businesses must not only meet compliance standards but also ensure their applications are resilient against cyber threats. As regulations tighten and the risk environment evolves, organizations face growing pressure to safeguard their applications while staying compliant. The need to balance security with legal requirements has never been more critical for IT professionals.

Modern technology plays a pivotal role in addressing these challenges. From AI-driven threat detection to advanced encryption techniques, innovative solutions can enhance both security and compliance. By leveraging these tools, businesses can create resilient applications that not only meet regulatory demands but also protect critical data from emerging threats.

Osman Celik, Research Analyst at KuppingerCole, will discuss the evolving regulatory compliance landscape, particularly focusing on the finance and public sectors. He will provide insights into recent developments in PCI-DSS, the EU AI Act, and other critical frameworks. Additionally, Osman will explore industry-specific best practices to help IT professionals navigate this complex environment.

Prakash Sinha, Senior Director & Technology Evangelist at Radware, will highlight actionable strategies to build resilience into your applications. He will discuss the practical implementation of advanced security measures, share case studies of successful organizations, and outline key steps to fortify applications against the growing landscape of cyber threats—all while maintaining compliance with regulatory standards.




SC Media - Identity and Access

Millions of records from MOVEit hack released on dark web

Reportedly 2.8 million Amazon records alone were exposed.



Anonym

Can an Existing Digital Identity Wallet Leverage a Hardware Security Module to Meet New EU Standards?


Anonyome Labs will co-present a paper with Australia’s Queensland University of Technology (QUT) at the 8th Symposium on Distributed Ledger Technology in Brisbane, Australia from November 28–29, 2024.

The paper, by Dr Paul Ashley, Ellen Schofield and George Mulhearn from Anonyome Labs, and Dr Gowri Ramachandran from QUT, considers how new European standards for the EU Digital Identity Wallet mandate support for a hardware security module (HSM) that can perform important cryptographic operations, providing very strong security and privacy protection for the user.

The paper outlines how an existing digital identity wallet can be enhanced to leverage an HSM, examining both inbuilt and external implementations, and presents a compatibility matrix by analyzing the existing credential standards and different HSM cryptographic capabilities.
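
In outline, the enhancement amounts to placing an abstraction boundary between the wallet’s credential operations and the key store behind them, so the same presentation flow can be backed by a software key or by an HSM. A minimal Rust sketch of that idea (our own hypothetical interface, not code from the paper):

```rust
// Hypothetical interface: the wallet signs through a trait, so the
// backing key can live in app memory or inside a hardware security module.
trait CredentialSigner {
    fn sign(&self, payload: &[u8]) -> Vec<u8>;
    fn key_location(&self) -> &'static str;
}

// Software-backed key: portable, but the secret lives in app memory.
struct SoftwareKey;
impl CredentialSigner for SoftwareKey {
    fn sign(&self, payload: &[u8]) -> Vec<u8> {
        payload.to_vec() // placeholder for a call into a crypto library
    }
    fn key_location(&self) -> &'static str { "app memory" }
}

// HSM-backed key: the private key never leaves the hardware; the wallet
// submits a payload and receives only the signature back.
struct HsmKey;
impl CredentialSigner for HsmKey {
    fn sign(&self, payload: &[u8]) -> Vec<u8> {
        payload.to_vec() // placeholder for a call into the secure element API
    }
    fn key_location(&self) -> &'static str { "secure hardware" }
}

// The credential presentation flow is unchanged regardless of the backing.
fn present_credential(signer: &dyn CredentialSigner, challenge: &[u8]) -> Vec<u8> {
    println!("signing with key held in {}", signer.key_location());
    signer.sign(challenge)
}

fn main() {
    let challenge = b"verifier nonce";
    present_credential(&SoftwareKey, challenge);
    present_credential(&HsmKey, challenge);
}
```

The tradeoffs the paper flags (algorithmic compatibility, user experience, performance) then live behind that boundary rather than in the wallet logic.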

Watch this short video about the new European digital identity wallet.

The paper concludes that supporting the EU DI Wallet technical Architecture and Reference Framework (ARF)—common standards and technical specifications and common guidelines and best practices for a Digital Identity Framework—is feasible and practical for mobile digital identity wallet applications, but tradeoffs will occur in algorithmic compatibility, user experience, and performance.

We will publish the full paper after the symposium.

Anonyome Labs is a sponsor of the 8th Symposium on Distributed Ledger Technology. See the symposium program for more information.

Distributed ledger technology is an emerging technology that provides a way to store and manage information in a distributed fashion. It enables the creation of decentralized cryptocurrencies, smart contracts, eGovernance, supply chain management, eVoting and so on, over a network of computer systems without any human intervention.

Unprecedented reliability and security over other cryptographic schemes have expanded the application domains of blockchain to include financial services, real estate, stock exchanges, identity management, supply chains, and the Internet of Things.

The symposium is a forum for researchers, business leaders and policy makers in this area to carefully analyze current systems or propose new solutions creating a scientific background for a solid development of innovative distributed ledger technology applications.


Explore Anonyome Labs’ digital identity wallet and reusable credentials solutions.

You might also like:

Aries VCX: Another Proof Point for Anonyome’s Commitment to Decentralized Identity
6 Facts About Digital Identities from One of the World’s Most-Streamed Cybersecurity Podcasts
Gartner Confirms Anonyome Labs’ Solutions Offer Competitive Edge

The post Can an Existing Digital Identity Wallet Leverage a Hardware Security Module to Meet New EU Standards? appeared first on Anonyome Labs.


SC Media - Identity and Access

Identify Security Training: How important is it? - Eric Belardo - CSP #200


Indicio

Introducing Indicio Proven Digital Farming — a data management solution that frees farmers to do what they do best, farm

A powerful, portable, privacy-preserving way to share and reuse authenticated data using Verifiable Credentials that saves farmers time, money, and tedium, while connecting stakeholders and unlocking value across the agriculture sector

SEATTLE, Nov. 6, 2024: With the launch of Indicio Proven® Digital Farming, authenticated data can now be shared instantly and reused endlessly across the agriculture value chain — suppliers, government agencies, financial services, and vendors — all while maintaining the farmer’s ownership of their data.

Farming is data-intensive work, with multiple data sources and regulatory and market requirements. Each hour spent on data management takes farmers away from farming — with a measurable economic cost. To meet the challenges of data management in agriculture, Indicio developed a flexible and scalable ecosystem solution using Verifiable Credentials and decentralized identity.

With a Verifiable Credential, a farmer can hold and manage authoritative, certified farm data from their phone and share it seamlessly with other stakeholders in the agricultural value chain all while maintaining data privacy and protection.

It’s an easy-to-implement solution that ensures farmers fully own their data. It eliminates the need for this data to be stored by third parties in order to be authenticated. Thanks to cryptography, the data shared from a credential cannot be tampered with — and the credential origin is always known. 

This means that data can be reused over and over again with the absolute certainty that those who need to see it can verify it as authentic. It gives farmers the power to be their own data platforms, while radically simplifying their data management burden. 
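
The tamper-evidence claim rests on ordinary digital signatures: change one byte of the signed payload and verification fails. A minimal sketch with Ed25519 via the `ed25519-dalek` crate (our illustration; the specific signature suites Indicio's credentials use are not stated in this post):

```rust
// Assumed deps: ed25519-dalek = { version = "2", features = ["rand_core"] }, rand = "0.8".
use ed25519_dalek::{Signer, SigningKey, Verifier};
use rand::rngs::OsRng;

fn main() {
    // The issuer signs the credential payload once, at issuance.
    let issuer_key = SigningKey::generate(&mut OsRng);
    let credential = br#"{"farm":"Example Farm","methane_t_per_yr":1.2}"#;
    let signature = issuer_key.sign(credential);

    // Any relying party can verify with the issuer's public key alone.
    let verifying_key = issuer_key.verifying_key();
    assert!(verifying_key.verify(credential, &signature).is_ok());

    // Altering even one byte of the data breaks verification.
    let tampered = br#"{"farm":"Example Farm","methane_t_per_yr":9.2}"#;
    assert!(verifying_key.verify(tampered, &signature).is_err());
    println!("signature binds the data to the issuer: tampering is detectable");
}
```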

Benefits

Farms and farmers hold and own their own data — not third parties.
Capture authenticated data once in tamper-proof records that can be shared from a phone.
Consent to share data is built into privacy-by-design tech.
Connect stakeholders across the agricultural value chain through seamless data sharing and authentication.
Simplify regulatory compliance.
Accelerate access to international markets.
Proven, award-winning success in New Zealand.

Award winning

Indicio’s Digital Farming solution was first developed for Trust Alliance New Zealand (TANZ), a nonprofit farming consortium.

“Being able to quickly share data about their goods or emissions to these key relying parties provided a huge benefit to the farmers, saving them time, creating better connections between them and their customers, and reducing the amount of effort they have to spend filling out the same forms multiple times,” said Sharon Lyon-Mabbet, Project Manager at TANZ.

TANZ’s implementation has won a prestigious Constellation Research SuperNova Award for Digital Safety, Governance, Privacy, and Cybersecurity. This is the second time an Indicio customer has won a Constellation Award.

Learn more about the project here.

A simple solution to an annoying and costly problem

“Verifiable Credentials are the perfect data management tool for a sector that relies on connecting multiple data sources with multiple parties for multiple purposes,” said Heather Dahl, CEO of Indicio. “Farmers don’t want to spend hours and hours on data management, sending the same information to multiple agencies, suppliers, and vendors. And now they don’t have to. With Indicio Proven Digital Farming, we have a capture once, reuse often technology that gives farmers full control and ownership over their data. It’s a way to turn data from being an obstacle to being an opportunity to unlock value, because now it’s easy to share authenticated data in a frictionless way with those who need to use it.”

The farmer as their own digital platform

Decentralized identity and Verifiable Credentials allow farmers to hold and share all kinds of tamper-proof data that can be instantly authenticated by relying parties:

Farm borders
Farm ownership
Methane emissions
Fertilizer application (soil nitrogen levels)
Pesticide & herbicide usage
Nutrient run-off
Water management
Implementation of food safety practices
Records for contaminant testing
Traceability information

Verifying software is simple to use and can be downloaded to a mobile device for instant in-the-field authentication.

What you get with Indicio Proven Digital Farming

We provide a complete solution that contains everything needed to get an entire data sharing ecosystem up and running fast, including digital wallet, mobile SDK, issuing, holding, and verifying software, hosting, support, constant updates, and even certified training. We can handle any customization for specialized use cases, and all our technology is built to meet current and emerging global decentralized identity standards, so you can be confident that your solution will work anywhere.

Indicio is the market-leader in decentralized identity and Verifiable Credential technology and has developed “government grade” digital identity and data sharing solutions for airlines, borders, banking and finance, health, and supply chains. 

Learn more about the solution at https://indicio.tech/digital-farming/, or contact our team to discuss ideas you have for a specific project.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Introducing Indicio Proven Digital Farming — a data management solution that frees farmers to do what they do best, farm appeared first on Indicio.


Spruce Systems

Why We Build Digital Infrastructure in Rust

Memory-safe programming offers a safer, more secure future.

If you’re alive in 2024, you’re probably used to hearing a lot about cybercrime. Large hacks, such as thefts of customers’ personal information, seem nearly constant – and they’re only projected to accelerate in coming years.

Recent advances in software development tools, however, offer hope. In February, the White House Office of the National Cyber Director issued a memo encouraging the wider adoption of what are known as “memory-safe” programming languages. That shift could mitigate up to 70% of hacks, preventing attacks that are currently causing a shockingly large amount of economic damage.

SpruceID has been an early adopter of memory-safe programming since its founding in 2020 as part of our commitment to high standards of security. Just about all of our tools are built using the memory-safe language Rust. Read on to find out more about memory-safe programming, Rust – and why all software builders should be taking similar steps into a more secure future.

Death By a Thousand Memory Leaks

Hacks based on flawed memory management are a large part of the massive economic and social harm caused by hacking – what the Council on Foreign Relations has described as a “death by a thousand cuts.”  USAID estimated $8 trillion in economic damage from cybercrime worldwide in 2023. One analysis estimated that cyberattacks will cost the U.S. economy alone more than $350 billion this year. That’s more than 20 times what the U.S. federal government spends on feeding school kids.

Poor memory management is a common weakness in older but still widely-used programming languages like C and C++, and according to research by Google, memory errors are the root cause of roughly 70% of all system-level hacks. Very broadly, a program can be exploited when it loses track of a chunk of the short-term memory (RAM) that programs run on. Attackers can use uncontrolled or badly indexed memory to alter the intended behavior of a program. The Spectre and Meltdown vulnerabilities, which abuse speculative execution to read memory that should be off-limits, are still a threat years after their discovery.
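
For a concrete taste of what "memory-safe" means, here is the classic use-after-free shape that C and C++ compilers will happily accept but that Rust rejects at compile time (a small sketch of ours, not SpruceID code):

```rust
fn main() {
    let data = vec![1, 2, 3];
    let first = &data[0]; // shared borrow into `data`'s heap allocation

    // Uncommenting the next line would free `data` while `first` still
    // points into it -- a use-after-free. Rust refuses to compile it:
    // error[E0505]: cannot move out of `data` because it is borrowed
    // drop(data);

    println!("first element is {first}"); // the borrow is still live here
}
```

In C, the equivalent sequence compiles silently and becomes a latent exploit; in Rust, the borrow checker turns it into a build failure.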

Wider use of memory-safe programming languages is a system-wide way to address the ceaseless torrent of hacks. The White House notice (summarized here by Security Intelligence) follows a 2022 bulletin by the National Security Agency also encouraging the move towards memory-safe programming languages. It’s unusual for agencies like the NSA to issue specific software development advice, making this guidance particularly notable. 

The unusual push is justified because memory-safe programming presents the possibility of what might sound like a fantasy: dramatically reducing the prevalence of destructive hacks by attacking one of their root causes.

The Language of Choice for Secure and Reliable Solutions

SpruceID is committed to staying at the forefront of security standards. Our tools handle highly sensitive data and are trusted to verify its validity, often in high-security settings. With security as a top priority, we carefully design, develop, and deploy our solutions to meet these demands. That's why we build our secure applications in Rust, a programming language known for its memory safety and robustness. Its adoption by leading organizations highlights its suitability for building resilient, high-security systems, and we are glad to be part of this movement.

Rust is becoming increasingly recognized for its excellent design and is by far the most widely used memory-safe programming language. It has been integrated into critical components of Google, Linux, Windows, and Nvidia products. The February White House report can’t be seen as picking favorites, so it’s not explicit, but reading between the lines, it’s fairly clear that Rust is meant to be front and center for those mulling a path toward improved memory safety.

One of the more remarkable advantages of Rust, as Google reports, is that building new components with Rust provides security advantages even without re-writing or heavily modifying legacy codebases. That makes the transition far more efficient: Google began pushing Android development to memory-safe languages in 2019, and memory vulnerabilities have declined from more than 70% to just 24% of Android vulnerabilities in the years since - without overhauling existing code.

In November of last year, Microsoft announced that it was investing $10 million in improving developer tooling for Rust and integrating Rust into Windows and Azure environments. Microsoft also made a large contribution to the Rust Foundation, where SpruceID is also a member, and Microsoft engineers have said that Rust is mature enough to integrate into core components such as the OS kernel. Linux, the operating system that runs many industrial server systems, is also actively integrating Rust into its core architecture, shifting away from what devs consider “inherent weaknesses” in older languages.

While security is the headline, Rust does bring many other benefits. It leads to better performance in many circumstances, even in comparison to other modern languages like Go. Programmers also broadly consider it a pleasure to use: Rust is far and away the most “loved” programming language, according to a survey by Stack Overflow. Programmer Gregory Szorc has explained the appeal by describing Rust as a perfect mix of innovative ideas and user-friendliness. So an added benefit of Rust, and one we’ve definitely experienced at SpruceID, is that it makes it easier to attract and keep top coding talent.

One Important Piece of the Security Puzzle

While memory-safe programming languages like Rust are essential in reducing vulnerabilities, they’re only one component of a robust security program. At SpruceID, we recognize that creating secure systems goes beyond selecting a single language - it’s about designing, testing, and maintaining a multi-layered strategy for every stage of development and deployment.

Rust helps us uphold these high standards, but it’s integrated into a wider approach that includes rigorous protocols, continuous monitoring, and regular updates. Each of these components reinforces the security, reliability, and privacy that our users expect.

Rust is The Future

At SpruceID, we’re focused on building better identity systems, which are poised to become a more secure and more private way of managing our digital lives. Building on a secure foundation, and aiding the broader transition to memory-safe programming, is a natural extension of SpruceID’s core mission.

This isn’t just about strong principles and good vibes, though - these recent government directives on memory safety are a strong signal that it’s the right strategic move, too. The White House sets guidelines for Federal contractors and procurement, so memory safety could become a requirement for those applications. Builders interested in working with the government should all be considering transitioning to memory-safe tools, and Rust is clearly at the top of that list.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Indicio

SITA, Idemia partner to build digital identity ecosystem for travel

PhocusWire The post SITA, Idemia partner to build digital identity ecosystem for travel appeared first on Indicio.

liminal (was OWI)

Market & Buyer’s Guide for Age Assurance 2024

The post Market & Buyer’s Guide for Age Assurance 2024 appeared first on Liminal.co.

Elliptic

Crypto regulatory affairs: Following the US elections, the industry anticipates regulatory clarity and move to pro-crypto stance

A sweep of the Presidency, Senate, and House of Representatives by the Republican Party in the November 5 US elections has the US crypto industry confident that regulatory clarity is on the way, and that a period of aggressive regulatory enforcement will be ending.



SC Media - Identity and Access

The rise of phishing-resistant MFA and what it means for a passwordless future

Slowly but surely, phishing-resistant forms of multi-factor authentication are catching on. Here's how to join the movement, and how it can lead to a fully passwordless environment.



Datarella

Supply Chain Tracking in Action

This article is the fifth in a series of posts about how our probabilistic 360° supply chain tracking product, Track & Trust, works. We described how the system works at […] The post Supply Chain Tracking in Action appeared first on DATARELLA.

This article is the fifth in a series of posts about how our probabilistic 360° supply chain tracking product, Track & Trust, works. We described how the system works at a component level in our previous articles. Now, we dive into the challenging environment where our pilot operations have been executed. We selected Lebanon, one of the most difficult operational locations in the world, for our first pilot shipments to really prove the mettle of the system.

Aid Pioneers – an Ideal Pilot Partner

We have been working with our humanitarian partner Aid Pioneers for many months to prepare for these shipments. Through close collaboration with on-the-ground initiatives and the private sector, Aid Pioneers connects available resources from donors directly with local organizations to foster sustainable, community-led change. They do this in the places that need them most, making them a highly innovative humanitarian agency. They take an end-to-end approach to the supply chain, which we believe suits Track & Trust perfectly. Aid Pioneers needs to extend tracking of supplies beyond what typical supply chain tracking products can accomplish, and we are helping them achieve this.

Supply Chain Tracking Challenges

Aid Pioneers’ logistics environment provides a perfect showcase for what Track & Trust can do. When Aid Pioneers ships a container full of medical supplies or solar power generation equipment to a Lebanese clinic or school, they hire a freight forwarder to pick up the goods. The freight forwarder then organizes the delivery to a local port via semi-truck. After that, a freight forwarder loads the container onto a ship. The ship travels to a port of entry in Lebanon, and we track its progress using a typical tracking link. However, once the container clears customs, we take over. We actively track it and pick up where traditional systems stop working.

At this point we encounter tricky conditions. Aid Pioneers’ local Lebanese partner Al-Manhaj breaks containers down into multiple pallets before final delivery. Some goods are then delivered to one location while others go to other locations at different times. To keep track of what was delivered when, we use probabilistic 360° supply chain tracking. We also developed strategies to deal with power and connectivity outages.

Outwitting Outages

These outages always happen at the wrong time, so it’s important that the system is able to handle them. We do this with built-in backup batteries and a battery management system. On top of that, the communications landscape is very challenging: sometimes there’s 4G connectivity, and at other times there are outages. Our mesh nodes can operate regardless, though, by caching incoming data locally. The nodes simply wait until the data can be posted or handed off to other mesh nodes. This approach multiplies the effectiveness of our communications assets. We also positioned one of our satellite uplinks at a local school. As a result, every event is (at the minimum) recorded and transmitted asynchronously – even when conditions are at their worst.
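
As a rough sketch of that store-and-forward behaviour, consider the minimal Rust model below; the event shape and the try_uplink function are hypothetical stand-ins, not Track & Trust’s actual code:

```rust
use std::collections::VecDeque;

/// A tracking event recorded by a mesh node (hypothetical shape).
#[derive(Debug, Clone)]
struct TrackingEvent {
    pallet_id: String,
    timestamp_ms: u64,
}

/// Stand-in for a real uplink (4G, satellite, or a neighbouring node).
/// Returns false when there is no connectivity.
fn try_uplink(event: &TrackingEvent) -> bool {
    let _ = event;
    false // pretend the network is down, to demonstrate caching
}

struct MeshNode {
    cache: VecDeque<TrackingEvent>, // locally cached, unsent events
}

impl MeshNode {
    fn record(&mut self, event: TrackingEvent) {
        // Always cache first so nothing is lost during an outage.
        self.cache.push_back(event);
        self.flush();
    }

    fn flush(&mut self) {
        // Drain the cache for as long as transmissions succeed;
        // whatever fails stays cached for the next attempt.
        while let Some(event) = self.cache.front() {
            if try_uplink(event) {
                self.cache.pop_front();
            } else {
                break;
            }
        }
    }
}

fn main() {
    let mut node = MeshNode { cache: VecDeque::new() };
    node.record(TrackingEvent { pallet_id: "PAL-042".into(), timestamp_ms: 0 });
    println!("events awaiting uplink: {}", node.cache.len());
}
```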

These logistics challenges are not unique to Aid Pioneers’ operations. However, they are particularly pronounced in the places where they work. We believe that if our system works there and brings value to freight forwarders and humanitarian organizations, it will work anywhere. As a result of this testing we’re confident in the capabilities of Track & Trust.

In our next post we’ll describe exactly how our pilot operations went – and what the big value drivers are.


The post Supply Chain Tracking in Action appeared first on DATARELLA.


Innopay

Exclusive roundtable on Europe's data economy with Mariane ter Veen and Yvo Volman

27 November 2024, Brussels, Belgium. Posted by Trudy Zomer on 12 November 2024, 13:25.

INNOPAY is pleased to announce that Mariane ter Veen, our Director of Data Sharing, has been invited to participate in an exclusive roundtable discussion with Yvo Volman, Director for Data at DG CNECT. Taking place in Brussels on 27th of November, this invitation-only event provides a unique opportunity for selected participants to gain insights into the European Commission’s priorities and upcoming initiatives to foster Europe’s data economy.

During the discussion, Yvo Volman will share the Commission’s perspective on key topics, including the European data union strategy and the implementation of the Data Act, aimed at building sectoral data spaces across Europe.

This exclusive event reflects INNOPAY’s ongoing involvement in shaping the future of data sharing and digital policy in Europe.


KuppingerCole

cidaas Auth Manager

by Alejandro Leal In today's digital landscape, it is critical for every organization to have an agile and modern Identity and Access Management (IAM) solution. By providing complete visibility into who accesses what, when and how, modern IAM platforms enable organizations to better manage and mitigate risk. cidaas offers an IAM platform that is based on a microservices architecture with a core se

by Alejandro Leal

In today's digital landscape, it is critical for every organization to have an agile and modern Identity and Access Management (IAM) solution. By providing complete visibility into who accesses what, when and how, modern IAM platforms enable organizations to better manage and mitigate risk. cidaas offers an IAM platform that is based on a microservices architecture with a core set of services designed to address both customer and employee requirements. This architecture facilitates rapid updates and scalability, while ensuring the integration of user management and authentication processes.

Dock

The DOCK token migration to CHEQ is now live!

As you already know, Dock and cheqd are merging their tokens and blockchains to form a powerful alliance in the decentralized identity space. This partnership unites the blockchain capabilities of two industry leaders to accelerate the global adoption of decentralized identity and verifiable credentials, providing individuals and organizations worldwide with

As you already know, Dock and cheqd are merging their tokens and blockchains to form a powerful alliance in the decentralized identity space.

This partnership unites the blockchain capabilities of two industry leaders to accelerate the global adoption of decentralized identity and verifiable credentials, providing individuals and organizations worldwide with secure, trusted digital identities.

As part of this evolution, the Dock network will migrate its functionality and all tokens to the cheqd blockchain. This transition will enable Dock to leverage cheqd’s advanced infrastructure, delivering even greater value to both ecosystems.

During the migration, existing $DOCK tokens will be swapped for $CHEQ tokens at a conversion rate of 18.5178 $DOCK to 1 $CHEQ, ensuring a seamless and straightforward transition for all token holders.

How to migrate your DOCK tokens to CHEQ

Before you start

Ensure you have a compatible wallet for $CHEQ. If you don’t already have one, follow these instructions to set one up. We recommend using the Leap wallet, which has a browser extension and a mobile wallet on both Android and iOS, and can be easily connected during the migration process. Another alternative is the Keplr wallet.

Update your wallet software to the latest versions. This ensures compatibility with the new system and reduces the likelihood of encountering bugs or issues during migration.

Note that the migration must be done through the Dock browser-based wallet. If you use the Dock Wallet App or Nova Wallet, you can easily add your account to the Dock browser wallet by following these steps.

If your $DOCK tokens are currently on an exchange, you’ll need to withdraw them to a Dock wallet to complete the token migration, as it can only be done from an address you own. Follow our guide on creating a Dock wallet account to get started. We are actively working with exchanges to allow them to handle the migration on behalf of users, and we’ll publish a list of participating exchanges as soon as it’s available.

Read and understand the migration’s terms and conditions before proceeding. You can review them here.

The migration service will only be available until March 15th, 2025. After this date, the migration will no longer be supported. Please ensure you complete the process before the deadline.

Migrating your DOCK tokens

1. Access the migration page: Click here to visit the migration page and begin the process.
2. Connect Leap or manually enter your cheqd account: Connecting the Leap wallet ensures that the tokens are sent to the cheqd account you control. If you do not see your Leap accounts in the dropdown, follow these steps to set up your Leap wallet for cheqd.
3. Select your Dock account: If your account isn’t already added, follow these instructions to add it.
4. Accept the Terms & Conditions: Once you’ve reviewed the T&Cs, click Submit to confirm your migration request.

The full balance of your Dock account will be migrated in a single transaction; partial amounts are not permitted. Once submitted, your $DOCK tokens will be burnt, and the converted $CHEQ tokens will be sent to your designated cheqd wallet using the swap ratio of 18.5178 $DOCK to 1 $CHEQ.
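
As a quick worked example of that swap ratio (the 10,000 DOCK balance below is purely illustrative, since the migration always converts your full balance):

```rust
// Published swap ratio: 18.5178 DOCK per 1 CHEQ.
const DOCK_PER_CHEQ: f64 = 18.5178;

fn dock_to_cheq(dock: f64) -> f64 {
    dock / DOCK_PER_CHEQ
}

fn main() {
    // A hypothetical balance of 10,000 DOCK:
    println!("{:.4} CHEQ", dock_to_cheq(10_000.0)); // prints 540.0210
}
```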

The migration process typically takes 1-2 business days, after which your $CHEQ tokens will be available in your cheqd wallet. Please bear in mind that during the holiday season (mid-Dec 2024 to early Jan 2025) it might take a bit longer.

Please follow these steps carefully, and if you have any questions, feel free to reach out to our team at support@dock.io.

Note: If you need any transaction reports from the Dock blockchain for tax purposes, make sure to download them from our Subscan blockchain explorer before March 15, 2025.

The Future of Decentralized ID

Dock and cheqd will continue as independent companies serving distinct market sectors in unique ways. By merging their tokens, expertise, and strategic focus, they will drive their shared vision forward with unstoppable momentum.

This token merger is not just a change; it's a monumental leap forward. By merging the $DOCK token with $CHEQ, we are unlocking unprecedented opportunities for our community, positioning you at the cutting edge of decentralized identity innovation.

The future of decentralized digital identity is bright, and with your $CHEQ tokens, you'll be part of a dynamic, growing ecosystem that is set to lead the industry. 

Dock and cheqd will shape a world where secure, verifiable credentials are the norm, and your involvement is key to making this vision a reality. The journey ahead is filled with potential, and we are thrilled to have you with us as we pave the way for the next era of digital identity.


auth0

Authentication and Authorization Enhancements in .NET 9.0

With .NET 9.0, some interesting authentication and authorization features have been added to the platform. Let’s take an overview of them.

Monday, 11. November 2024

1Kosmos BlockID

Vlog: How Can Remote Caller Verification Protect Your Organization From Social Engineering?

Mike Engle: Hi, everybody. My name is Mike Engle, co-founder and head of strategy here at 1Kosmos. I’m joined today by Jens Hinrichsen. Say hello, Jens. Jens Hinrichsen: Hello, everybody. Mike Engle: Jens is our head of sales here at 1Kosmos, spends a lot of time in the trenches. And today we’re here to talk … Continued The post Vlog: How Can Remote Caller Verification Protect Your Organization

Mike Engle:
Hi, everybody. My name is Mike Engle, co-founder and head of strategy here at 1Kosmos. I’m joined today by Jens Hinrichsen. Say hello, Jens.

Jens Hinrichsen:
Hello, everybody.

Mike Engle:
Jens is our head of sales here at 1Kosmos, spends a lot of time in the trenches. And today we’re here to talk about remote caller verification. We have an acronym for that, RCV. But Jens, would you mind giving your quick pitch on what RCV is for the folks out there?

Jens Hinrichsen:
Yeah, I would love to. And I think certainly also, Mike, with all the conversations that we’re both fortunate to have with a variety of organizations globally, please chime in with some of your own perspective as well. But I think remote caller verification, whether it is IT service desk for employees, contractors, other third parties that are interacting with an organization and have access to the inner sanctum, if you will, of an organization versus, say, contact center or call center. Where for years the industry has been working on solutions to mitigate fraud from a customer or outside facing standpoint, this is really about these emerging threat actor groups. Not even so much emerging, but Scattered Spider certainly has taken the cake recently in terms of being in the press most from MGM, Caesars, and a host of other organizations where they have as a group socially engineered their way through the IT service desk of an organization.

So in the case of 1Kosmos, hi, I’m Mike Engle, I’m a co-founder. Service desk agent’s like, “Oh, my gosh, I got a co-founder on the call.” And if it’s not Mike and it’s a threat actor group, very charming, you name it, they can socially engineer their way in, get the credential reset, and then have Mike’s access to the company. So it is a big area of threat. It’s a big area of inefficiency also that organizations are trying to get better shored up. Mike, any other thoughts you have on that?

Mike Engle:
Yeah, so a lot of friends in the industry, I talk to them about this and they don’t have the right tools typically. So they’re using old, tired methods or no methods. They just turn it off because they can’t trust it. And an example would be secrets. What’s your employee ID? What was your date of hire? What was the amount of your last payroll deposits? Which I wouldn’t know that. So sometimes those are too hard and don’t work or they’re too easy to guess and anybody can use them. So social engineering has been around forever, but they’ve gotten really good at finding the information, the legacy ways that people have been using over time. What are some of the ways that they’re using now to get into help desks?

Jens Hinrichsen:
Well, it’s interesting, too. I think back to the point we made earlier from a fraud standpoint, I mean, there’s been social engineering going on for ages. Whatever that chain looks like, phishing, malware, getting information, and then pretending to be a customer of an organization, malicious actors are looking for economic gain and other impact for a variety of reasons. But where you can have big impact is when you’re able to infiltrate an organization. It’s one thing to steal $50,000 from a customer of an organization. It’s a big deal. You want to mitigate that, but as far as being able to get into the inner bowels of an organization’s IT stack moving laterally, whatever the case is, that is a huge area of focus these days.

So a lot of the, call it, the social engineering talent, the charms, I mean, Mike, you and I have even through different circles heard some of these calls and they’re … Wow, if I’m the service desk agent, yeah, I’m believing this person. You don’t have an ID for what reason or you don’t know this for whatever reason? Sure, of course. So I think it’s really been the same playbook focused on this avenue now. And again, it is really, really easy for these sophisticated threat actors to sound very believable, have core information that’s needed that would get a service desk agent to say, “Mr. Engle, co-founder of 1Kosmos, that’s fine that you don’t have this and this, but I’m going to issue a new credential to you right away. I want to make sure you’re happy.”

Mike Engle:
Right, and they may create a sense of urgency. I’m a doctor, I got a patient here at a table and I can’t unlock my stethoscope, whatever it is. So yeah, that’s a common tactic as well that we’ve seen them use. And then once they get that initial credential, they’re typically 50% of the way of getting into the core network and things go downhill from there. And so yeah, the traditional KBA, which you would think stands for knowledge based authentication.

Jens Hinrichsen:
Knowledge based authentication. Right.

Mike Engle:
We actually refer to it as known by anybody, KBA. So it really is close to useless. And whenever I opened a new financial services account and they pop up those five questions, what was the type of car you had when you were five years old or whatever, I run for the hills if I can. So what can we do about it? How does 1Kosmos, for example, mitigate this threat?

Jens Hinrichsen:
Yeah. And even, Mike, before we go there, and I think one of the examples, what’s one of the KBA examples you’ve used before? It’s like your grandmother’s shoe size when she was nine or something. Well, whatever the iteration is, before we even get into solution, I think some of the really interesting parts that we’ve gotten more intimate with is even the other ways that organizations are trying to address this. So KBA, sure, that’s one. Known by anybody, as you said. OTP. Hey, I’m going to push you an OTP. Well, we still don’t know it’s Mike. And then we’re also seeing a lot of organizations, not even necessarily just at the highest level of privilege, but even more broadly where it’s an escalation to the manager. And you do the math on that in terms of just sheer productivity loss and in some cases you might not still be actually verifying it’s that genuine user.

So there’s these kind of clunky ways and tools that we as an industry have been trying to address this. And so to your question, Mike, it’s like, well, gosh, what is a way that an organization can do this where it’s effectively automated? So somebody is still calling into the service desk, but you’re removing the onus of verification from the service desk agent because the reality is service desk agents are being asked to do so many things already and they’re always do it in this amount of time, get it faster, faster. So you don’t want to forsake quality, but how do you have a very easy process for both agent and user, whether genuine or a malicious actor, to undertake that then gives the credence that, yes, this is actually Mr. Engle calling in? And so there are a few ways to do it. One that really gives, I’ll say, the minimum viable baseline would be a one-time identity verification or identity proofing event where I call into the service desk and I’m pretending to be you.

And the service desk agent says, “Okay, Mr. Engle, I’m going to send you a link either to your phone, to your email address.” There are a variety of things that you have to take into consideration obviously in terms of companies that might not have employees be able to have phones or are they company owned, et cetera. Those are all things that you see and we navigate accordingly, but the very simple process of opening up a link, scanning the front and back of a driver’s license, a passport, some other government issued document, and then doing a matching selfie against the image that’s on that document. And what we can do with very high assurance is give a thumbs up or thumbs down. And all we would do is simply say the agent, “Yep, this is Mr. Engle,” or in my case, pretending to be you, “No, this is not.” And so that’s a really simple initial way to do it. The really exciting part, and this is what permeates the next generation, which is actually here now and gaining steam, is the user control.

That reusable identity of, hey, once I have verified myself, once I essentially have an identity wallet that I can then present wherever it’s needed that proves that I am Mike Engle and I don’t have to go back through the whole process of scanning something, selfie, et cetera. So the elegance is there. You get high assurance, quick and easy, reduces call center times. And then again, you’re removing that, again, onus on the service desk agent of having to be the one. And there are other companies, too, Mike, where it’s, “Hey, can you hold your ID up to the camera?” It’s hard enough to tell that they’re real when you’re holding them, much less over a camera.

Mike Engle:
Yeah. And when I hold my license up to a camera, now what’s the other person doing with that information? First of all, they can’t verify it. It’s too hard. You can’t see the little security features and then now I’ve just showed you my driver’s license number. That’s something you don’t want floating out there on a video call. So yeah, the privacy preserving aspects are really key. If you can assure the help desk and your remote callers, your remote employees, or customers that it’s safe, then they’ll trust it and feel good about using it as well. That’s a great point. Yeah, so I think we’ve about done it. I guess one last thing is how hard is it to implement a tool like identity-based biometric verification for a service desk?

Jens Hinrichsen:
Yeah. What’s the usual answer? Well, we could have had it in yesterday, so you got a couple of flavors. And I think the great thing for us as an industry is you can literally start as fast as you can start with, call it, a touchless integration where you’re simply calling out to an API. That link that we talked about earlier that gets sent to the user, that’s essentially a service. It’s a hosted service and you’re not having to replumb or do anything on day one within your organization. You can address the threat, make it a simpler process literally within a couple of weeks. And then the subsequent steps that I know we’ve observed with our customers is there are things that you can do to tighten some of the workflows, whether it’s ServiceNow or whatever the service desk system or backend might be.

But then that next step, and it can come pretty quickly, is the organization’s adoption and use of that reusable identity. And it’s a pretty powerful thing when we think about especially at the point of, say, onboarding. Whether it’s say HR onboarding, contract, or third-party onboarding, you’re doing that verification once. The user now owns it. You made a great point about privacy preservation. I mean, that’s what we’re all in the space for, right? It’s one thing to have a point in time, but you have to make sure it’s privacy preserving. But then also, let’s make it efficient for everybody. Do the verification once and then all you’re doing is you’re essentially authenticating into systems or doing high-risk transactions or whatever the case is after that.

Mike Engle:
Right, right. And you can’t implement something like this without uttering the words ROI, right?

Jens Hinrichsen:
Yeah.

Mike Engle:
You have the obvious security benefits, stop bad guys, but the user experience is actually better. And then an organization can have 100,000 calls into a help desk a year. On average, 30% to 50% of those are password resets or identity related, so why not remove them and save those calls from even coming in? You can automate this, you can do it in a self-service password reset manner as well, SSPR. So yeah, a lot of reasons to do it.

Jens Hinrichsen:
Yeah. Well, no, and you’re right. And it’s fun to build these business cases alongside organizations because it’s not just a security risk mitigation. There are very direct, like you said, Mike, very direct savings, overall operating efficiencies. Even to the point where as an organization lifts its security posture, they’re getting better policy. Their cyber insurance policies are coming down or at least not going up as quickly as they might go, depending on what most of us are feeling in the industry. So that’s a great point, that this is a really a multi-pronged business case. And I think we’ve observed 10, 20, 30X return on an investment in even just the first year.

Mike Engle:
Yeah. Yeah, it’s a no brainer. So hopefully we’ll get the phone calls before the bad guys get in and not after, but either way …

Jens Hinrichsen:
Mike’s personal number is…

Mike Engle:
That’s right. Well, cool. Thanks so much for joining. It’s been fun chatting with you about this. Hopefully somebody out there will see it and will spark some ideas to make a difference in the world of cybersecurity.

Jens Hinrichsen:
Brilliant. Great chat, Mike.

Mike Engle:
Thank you.

The post Vlog: How Can Remote Caller Verification Protect Your Organization From Social Engineering? appeared first on 1Kosmos.


KuppingerCole

Synthetic Data

by Anne Bailey The term synthetic data stands for artificially generated data that closely replicate the statistical properties, patterns, and characteristics of the real data. This replication mimics reality without including actual information about individuals or entities. As such, it becomes a secure and privacy preserving alternative to using raw, sensitive, or proprietary data. This data is

by Anne Bailey

The term synthetic data stands for artificially generated data that closely replicates the statistical properties, patterns, and characteristics of real data. This replication mimics reality without including actual information about individuals or entities. As such, it becomes a secure and privacy-preserving alternative to using raw, sensitive, or proprietary data. This data is used in training, testing, validation, and analytics. Artificial intelligence (AI) uses advanced algorithms to generate these datasets, preserving the statistical integrity of original data sources without exposing private information.
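
As a toy illustration of that idea (production generators use far richer models than a single Gaussian), the sketch below fits simple statistics to a handful of made-up “real” values and then samples synthetic ones that share them, using the rand and rand_distr crates:

```rust
// Toy synthetic-data sketch: fit mean/std-dev of "real" values, then
// sample new values that share those statistics but expose no record.
use rand_distr::{Distribution, Normal};

fn main() {
    let real: [f64; 5] = [41.0, 37.5, 44.2, 39.9, 42.4]; // e.g. sensitive ages (made up)

    let mean = real.iter().sum::<f64>() / real.len() as f64;
    let var = real.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / real.len() as f64;
    let normal = Normal::new(mean, var.sqrt()).expect("valid std dev");

    let mut rng = rand::thread_rng();
    let synthetic: Vec<f64> = (0..5).map(|_| normal.sample(&mut rng)).collect();
    println!("synthetic sample: {synthetic:?}"); // statistically similar, not real records
}
```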

Unified Endpoint Management: HP

by John Tolbert In the IT landscape, managing a diverse array of devices such as smartphones, tablets, laptops, and IoT devices presents significant challenges. Device discovery can be difficult due to the distributed and dispersed nature of work, especially in the post-pandemic Work From Anywhere (WFA) and Bring Your Own Device (BYOD) paradigms. After devices are discovered, IT teams face the tas

by John Tolbert

In the IT landscape, managing a diverse array of devices such as smartphones, tablets, laptops, and IoT devices presents significant challenges. Device discovery can be difficult due to the distributed and dispersed nature of work, especially in the post-pandemic Work From Anywhere (WFA) and Bring Your Own Device (BYOD) paradigms. After devices are discovered, IT teams face the task of efficiently managing and configuring these devices, ensuring that each one complies with organizational security policies. The following are some of the common challenges that organizations face with regard to managing computing endpoints.

Security Service Edge: Broadcom

by Mike Small Digital transformation and cloud-delivered services have led to a tectonic shift in how applications and users are distributed. Protecting sensitive resources of the increasingly distributed enterprise with a large mobile workforce has become a challenge that siloed security tools are not able to address effectively. In addition to the growing number of potential threat vectors, the

by Mike Small

Digital transformation and cloud-delivered services have led to a tectonic shift in how applications and users are distributed. Protecting sensitive resources of the increasingly distributed enterprise with a large mobile workforce has become a challenge that siloed security tools are not able to address effectively. In addition to the growing number of potential threat vectors, the very scope of corporate cybersecurity has grown immensely in recent years.

Digital Divide: The US-China Struggle for Cyberspace

by Alejandro Leal The end of history? In the early 1990s, as the Cold War receded into history, political theorists proclaimed the "end of history," suggesting a future dominated by liberal democratic values under a unipolar international system led by the United States. This period coincided with the rapid expansion of the Internet, which was envisioned as a tool to promote global connectivity

by Alejandro Leal

The end of history?

In the early 1990s, as the Cold War receded into history, political theorists proclaimed the "end of history," suggesting a future dominated by liberal democratic values under a unipolar international system led by the United States. This period coincided with the rapid expansion of the Internet, which was envisioned as a tool to promote global connectivity.

However, the ensuing decades have seen a shift toward a multipolar world, with rising powers such as China and regional blocs asserting their influence. This shift has fragmented both cyberspace and the global economy, with nations prioritizing national security over global interests, resulting in a cyber landscape characterized by sovereignty and divergent norms.

Cyberspace, often perceived as an abstract concept, is actually grounded in a robust architecture that encompasses both physical and software infrastructure. This includes undersea cables, terrestrial networks, satellites, and data centers, alongside essential protocols like TCP/IP that facilitate data transfer.

This infrastructure is central to modern geopolitics, emphasizing that control over data and management of information flows are now as strategically important as territorial dominance was in previous centuries. Modern geopolitical strategies are increasingly focused on establishing, defending, and expanding digital domains as much as physical ones.

Two tigers cannot share the same mountain

This can be illustrated by contrasting international commitments: the "Declaration for the Future of the Internet," signed by over 60 governments including the U.S. and EU, promotes a vision of an open and secure Internet, whereas China's State Council's "Jointly Build a Community with a Shared Future in Cyberspace" reflects an alternative vision emphasizing digital sovereignty and state control. The result is a global divide in cyberspace governance and Internet freedom.

The strategic competition between the U.S. and China also extends into the uncharted depths of the ocean, centering on the undersea fiber-optic cables that carry more than 95% of intercontinental Internet traffic. These cables are essential for everything from consumer transactions to government communications. Recently, both major American tech companies and Chinese state-owned enterprises have tightened their control over these assets.

The submarine cable industry is a niche but critical sector that relies on a limited global fleet capable of laying and maintaining these cables. However, this lack of expertise sometimes forces Western governments to rely on foreign powers such as China for essential repairs, creating potential security vulnerabilities. Notably, China has strategically emphasized its role in the “maintenance” aspect, seeking to position itself as an indispensable player in the ongoing operation and upkeep of this vital infrastructure.

At the heart of this competition are semiconductor microchips, which are central to both civilian and military technologies. China's strategy to dominate this essential industry underlines its broader economic and political ambitions to supplant the U.S. as hegemon in the Asia-Pacific region and establish its own “sphere of influence”. This strategic competition is demonstrated by the tensions over Taiwan, a key center of semiconductor manufacturing, where Beijing and Washington's interests are sharply at odds.

Strategic Competition in the Digital Age

Global cyber conflicts and the economic impacts associated with them are reshaping international relations in profound ways. As nations vie for control over critical internet infrastructure and data flows, cyberspace has become a new domain of strategic competition, paralleling traditional conflicts over maritime and land resources. The stakes are high, as control over AI technologies and the cyber realm carries significant implications for national security, military advantage, and technological edge.

Unfortunately, a fragmented international system and divided cyberspace hinder the global cooperation needed to tackle pressing challenges such as climate change and the governance of AI. When the world's nations are divided, their collective power to address these universal issues is significantly weakened. As another Chinese proverb wisely states: "A single tree does not make a forest.”

Join us in December in Frankfurt at our cyberevolution conference, where we will continue to discuss the cyber threat landscape and its economic impact.

See some of our other articles and reports:

Software Supply Chain Security
Cyber Risks from China: How Contract Negotiations Can Mitigate IT Risks
Beyond Boundaries: The Geopolitics of Cyberspace

Security Orchestration, Automation and Response (SOAR)

by Alejandro Leal As the number and sophistication of cyberattacks have increased over the years, it has become clear that traditional cybersecurity methods and tools are increasingly inadequate to address these evolving threats. Large organizations, whether part of critical infrastructure or not, must be able to detect and respond to incidents by monitoring security and analyzing real-time events

by Alejandro Leal

As the number and sophistication of cyberattacks have increased over the years, it has become clear that traditional cybersecurity methods and tools are increasingly inadequate to address these evolving threats. Large organizations, whether part of critical infrastructure or not, must be able to detect and respond to incidents by monitoring security and analyzing real-time events. To stay secure and compliant, organizations need to actively seek out new ways to assess and respond to cyber threats while providing Security Operations Center (SOC) analysts with the right tools.

Sunday, 10. November 2024

KuppingerCole

Digital Sovereignty or Global Connectivity? The US-China Cyberspace Divide

In this episode, host Matthias welcomes Research Analyst Alejandro Leal to explore the evolving landscape of cyber warfare. Drawing from William Gibson's sci-fi classic "Neuromancer," they discuss how the digital battleground is now a critical arena for nations, corporations, and cyber criminals. Their conversation covers the economic consequences of cyber attacks, the strategic importance of un

In this episode, host Matthias welcomes Research Analyst Alejandro Leal to explore the evolving landscape of cyber warfare. Drawing from William Gibson's sci-fi classic "Neuromancer," they discuss how the digital battleground is now a critical arena for nations, corporations, and cyber criminals.

Their conversation covers the economic consequences of cyber attacks, the strategic importance of undersea fiber optic cables, and the role of semiconductor manufacturing in global tensions. Learn how different national perspectives on cyberspace shape security measures and why international cooperation is essential in addressing challenges like AI governance and climate change.

Join Matthias and Alejandro as they dissect the current state of cyber warfare and its implications for global security. Don't forget to leave your comments and questions below!

Alejandro's Blog: https://www.kuppingercole.com/events/cyberevolution2024/blog/us-china-struggle-for-cyberspace



Friday, 08. November 2024

Extrimian

A Leap Forward in Decentralized Digital Identity

The Buenos Aires City Government has embarked on a transformative journey by integrating QuarkID into its miBA platform, showcasing a significant leap in decentralized digital identity. This initiative not only enhances privacy and security for citizens but also marks a pivotal moment in digital governance. The Role of Key Players Extrimian Extrimian is a key […] The post A Leap Forward in Decen

The Buenos Aires City Government has embarked on a transformative journey by integrating QuarkID into its miBA platform, showcasing a significant leap in decentralized digital identity. This initiative not only enhances privacy and security for citizens but also marks a pivotal moment in digital governance.

The Role of Key Players

Extrimian

Extrimian is both a key participant and a technical implementer of the QuarkID protocol, and it used its IDConnect product to facilitate the integration of QuarkID into miBA. This effort underscores Extrimian’s commitment to advancing decentralized identity solutions.

Government of Buenos Aires

The Buenos Aires City Government (GCBA) app miBA has been crucial in adopting and integrating digital solutions that improve city management and citizen services, enhancing both efficiency and transparency.

The goal of this initiative is to give 3.6 million residents of Buenos Aires greater control over their personal information.

zkSync

Powered by zkSync, QuarkID leverages advanced zero-knowledge proofs to ensure secure and private blockchain transactions, significantly enhancing data protection on the miBA platform.

IT Rock

IT Rock has played an instrumental role in seamlessly integrating QuarkID with miBA, ensuring that the technological deployment aligns with the city’s needs for digital identity solutions.

QuarkID

As a protocol integrated into miBA, QuarkID stands at the forefront of this initiative, enabling the secure and efficient verification of digital identities across Buenos Aires.

What is the purpose and use of miBA?

miBA is a digital platform by the Government of Buenos Aires that centralizes access to various city services using advanced technologies like blockchain. This platform allows citizens to securely manage documents and services, enhancing privacy and efficiency. The integration of decentralized identity solutions like QuarkID into miBA exemplifies a significant advancement in providing secure and user-focused digital governance.

Expanding Digital Identity in Buenos Aires

This project by the City of Buenos Aires marks a global milestone as the first city to implement decentralized identity technology on a large scale, issuing verifiable credentials to its entire population. This initiative not only advances the digitalization of public services but also sets a new standard in protecting citizens’ data privacy and security.

The integration has expanded to include 32 types of verifiable credentials, such as Birth and Marriage certificates, Student IDs, Gross Income Tax Certificates, Salary Receipts GCBA, Employee Credential GCBA, and more. This expansion not only simplifies the management of personal documents but also enhances the interoperability of digital credentials across various services.

Documentation and Process Integration

This integration process, managed in collaboration with IT Rock and Extrimian, exemplifies a streamlined approach to adopting IDConnect. This process is pivotal for cities and businesses looking to implement similar decentralized identity solutions.

Source: https://buenosaires.gob.ar/innovacionytransformaciondigital/miba-con-tecnologia-quarkid-la-ciudad-de-buenos-aires-incorporo

Voices from the Ground

Read some quotes from IT Rock and GCBA representatives that provide personal insights into the project’s impact and their experiences, emphasizing the collaborative effort required to modernize public services.

Extrimian’s CEO, Guillermo Villanueva, shares his thoughts on IDConnect’s role in this integration:

“With Extrimian IDConnect, we are laying the foundations for a more secure, private and self-managed exchange of information, and building a world with more trust and less friction. 

Our product facilitated the process of miBA-QuarkID integration by IT Rock thanks to the simplicity of our product and the support of Extrimian’s team.”

On the Buenos Aires City Government side, Juan Pablo Migliavacca, Director General of Digital Citizenship at the GCBA Secretariat of Innovation and Digital Transformation, shares:

“The implementation of IDConnect was critical to quickly, securely, and efficiently connect our miBA system with the QuarkID protocol. Thanks to this integration, and continued work with the Extrimian team, we simplified and improved citizens’ access to their data in a reliable, transparent, and secure way, in a completely digital, frictionless environment.”

Conclusion

The integration of QuarkID into Buenos Aires’ miBA platform is more than a technological upgrade; it is a strategic enhancement to the city’s digital infrastructure, setting a benchmark for other cities worldwide.

For further details on the decentralized digital identity movement and Extrimian’s solutions, visit our Use Cases page.

This blog post aims to provide a comprehensive overview of the transformative integration of QuarkID with miBA, illustrating the synergy between technology providers and governmental vision in advancing digital identity solutions. 

For more detailed insights and developments, visit the Extrimian website and the Extrimian Academy.

Download miBA

iOS Android

Download QuarkID

iOS Android

The post A Leap Forward in Decentralized Digital Identity first appeared on Extrimian.


HYPR

HYPR Featured Partner for YubiKey Bio Multi-Protocol Edition

Today Yubico announced the general availability of its YubiKey Bio - Multi-protocol Edition, which supports biometric authentication for FIDO and Smart Card/PIV protocols. Like other YubiKey Bio Series, the new multi-protocol keys incorporate a fingerprint sensor, enabling secure, convenient biometric and PIN-based passwordless login across devices and platforms. The multi-protocol keys

Today Yubico announced the general availability of its YubiKey Bio - Multi-protocol Edition, which supports biometric authentication for FIDO and Smart Card/PIV protocols. Like other YubiKey Bio Series, the new multi-protocol keys incorporate a fingerprint sensor, enabling secure, convenient biometric and PIN-based passwordless login across devices and platforms. The multi-protocol keys, however, offer additional flexibility for enterprises, especially when combined with the HYPR platform.

"By combining Yubico's YubiKey Bio Series with HYPR's advanced solutions, organizations can effortlessly transition to a fully passwordless environment," said Jeff Wallace, SVP Product at Yubico. “This partnership not only enhances biometric authentication but also streamlines the process for desktop logins and strengthens phishing-resistant capabilities. With features like single-step YubiKey fingerprint setup for both web and workstation authentication, centralized credential management, and flexible authentication methods, we empower users to manage their security with confidence, even in sensitive environments.”

HYPR Plus YubiKey Bio — Multi-protocol Edition 

HYPR has worked closely with Yubico for years to bring flexible, phishing-resistant security to businesses around the world. The YubiKey Bio – Multi-protocol Edition is another step towards fully phishing-resistant, passwordless adoption and HYPR is proud to be Yubico’s sole featured partner.

Accelerate Passwordless Strategy

Available in both USB-A and USB-C form factors, the new multi-protocol YubiKeys support modern FIDO and Smart Card/PIV protocols, providing phishing-resistant login for desktops and web applications, across both legacy on-premises and modern cloud environments. Our joint solution makes it easy to provision and roll out the multi-protocol security keys, bringing enterprises the most versatile, secure, hardware- and software-based passwordless biometric authentication on the market.

Make Teams More Productive

The new biokeys provide near-instant login using fast, secure biometrics instead of PINs. Seamless desktop to web access removes extra authentication steps without compromising security.

Simplify YubiKey Onboarding and Management

HYPR provides choice to admins and flexibility for end users. Admins may enable users to start with a new YubiKey out of the box free of any pre-enrolled certificates. Users can enroll their YubiKeys in a single step click-through with the HYPR Passwordless client.



Users can also easily manage their security keys for lifecycle events such as unpairing, changing the fingerprint, resetting and more through the HYPR application. Administrators can also centrally manage user passwordless access through the HYPR Control Center.


YubiKey Login Flow With HYPR


Product Highlights

Desktop login on Microsoft Windows using Smart Card/PIV with fingerprint
Web authentication with FIDO2/WebAuthn and FIDO U2F using the same biometrics as desktop login
Single-step enrollment for workstation and web using the HYPR application and no pre-enrolled certificates required
Users can centrally manage credentials through the HYPR application
Flexibility of authentication methods for various use cases, including account recovery and shared workstations

The YubiKey Bio - Multi-protocol edition is available globally through YubiKey as a Service. Learn more about the HYPR and YubiKey integration.

To see HYPR and the new YubiKey Bio - Multi-protocol Edition in action, schedule a demo.



Finicity

Simplify and Speed Up Customization with Mastercard’s New Customize Connect Editor 

Mastercard Open Banking is transforming the way businesses tailor customer experiences with the launch of Customize Connect, a no-code editor that makes customizing Connect experiences faster, simpler, and fully in… The post Simplify and Speed Up Customization with Mastercard’s New Customize Connect Editor  appeared first on Finicity.

Mastercard Open Banking is transforming the way businesses tailor customer experiences with the launch of Customize Connect, a no-code editor that makes customizing Connect experiences faster, simpler, and fully in your control. Available through the Client Hub portal, this powerful new tool allows clients to easily personalize their Connect experiences without needing to rely on Mastercard’s support teams. 

Customize Connect: Empowering Businesses to Optimize Their Customer Journeys 

Customize Connect puts clients in the driver’s seat, offering an intuitive, self-service interface that allows clients to adjust key elements of the Connect experience—whether for testing or production—using just one simple editor. With real-time validation, businesses can rapidly iterate and deploy updates, enhancing the way their customers securely link accounts. 

Now, businesses can manage their Connect experiences independently, from onboarding new experiences to fine-tuning existing ones, all without the need for extensive technical knowledge. It’s all about giving clients the ability to quickly adapt and scale their offerings based on customer needs. 

Key Customization Features

Branding Flexibility: Customize Connect makes it easy to adjust the look and feel of the Connect experience to match your brand identity. Upload logos, match accent colors, and ensure seamless integration with the rest of your user interface for a consistent experience.
Financial Institution Customization: Clients can tailor the financial institutions displayed to end users, ensuring they see the banks they’re most likely to use. With the ability to customize up to 8 FIs, businesses can simplify authentication by presenting the most relevant options.
Streamlined Account Selection: Whether your customers are selecting one or multiple accounts, Customize Connect allows you to refine the experience by controlling which account types are available for selection. This is especially useful in payment-focused experiences, where you may only want to show checking or payment accounts.
Real-Time Testing & Validation: With the ability to make changes on-the-fly, businesses can validate their customizations in real time, reducing the need for lengthy testing periods and ensuring smooth deployment.

Seamless Integration into Your Workflow

Customize Connect is integrated directly into the Client Hub portal, making it easy to manage all your settings in one place. For more technical users, access it through Mastercard Developers to incorporate it into your existing projects. Whether you’re adjusting a live production experience or testing new options, the process is quick, simple, and completely within your control. 

Learn More 

For a full walkthrough of how to use Customize Connect, visit Mastercard Developers for detailed documentation or watch the quick demo video below to see the tool in action. 

With Customize Connect, Mastercard Open Banking empowers businesses to create better and tailored customer experiences on their terms. 

The post Simplify and Speed Up Customization with Mastercard’s New Customize Connect Editor  appeared first on Finicity.


auth0

What Are OAuth Pushed Authorization Requests (PAR)?

Learn what Pushed Authorization Requests are and when to use them to strengthen the security of your OAuth 2.0 and OpenID Connect-based applications.
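
For context on what that flow looks like on the wire (per RFC 9126, which defines PAR): the client first POSTs its authorization parameters to the provider’s pushed authorization request endpoint over the back channel, receives a short-lived request_uri, and then redirects the user to the authorize endpoint referencing only that URI. Below is a rough Rust sketch using the reqwest, serde_json, and urlencoding crates; the endpoint URLs, client ID, and secret are placeholders, not Auth0-specific values.

```rust
// Rough PAR sketch (RFC 9126). All URLs/credentials are placeholders.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Step 1: push the authorization request parameters over the back channel.
    let resp: serde_json::Value = client
        .post("https://auth.example.com/oauth/par")
        .basic_auth("my-client-id", Some("my-client-secret"))
        .form(&[
            ("response_type", "code"),
            ("client_id", "my-client-id"),
            ("redirect_uri", "https://app.example.com/callback"),
            ("scope", "openid profile"),
        ])
        .send()?
        .json()?;

    // Step 2: the server returns a one-time request_uri (plus expires_in).
    let request_uri = resp["request_uri"].as_str().expect("request_uri in response");

    // Step 3: redirect the browser using only client_id + request_uri, so the
    // actual authorization parameters never travel through the front channel.
    let authorize_url = format!(
        "https://auth.example.com/authorize?client_id=my-client-id&request_uri={}",
        urlencoding::encode(request_uri)
    );
    println!("redirect user to: {authorize_url}");
    Ok(())
}
```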

Datarella

Confidential Computing for Industry 4.0

With the Cosmic-X project nearing its conclusion, it is finally time to lift the curtain on the blockchain solution that Datarella has built over the last two years to enable […] The post Confidential Computing for Industry 4.0 appeared first on DATARELLA.

With the Cosmic-X project nearing its conclusion, it is finally time to lift the curtain on the blockchain solution that Datarella has built over the last two years to enable confidential computing and data sharing in Industry 4.0. In this first entry of a series of technical posts about designing, implementing, and integrating an edge-to-cloud blockchain solution, we discuss the evaluation process for selecting a suitable blockchain platform for Cosmic-X and how that platform operates on a protocol level to provide an open, transparent, and secure infrastructure for industrial use cases.

Evaluating Blockchain Platforms

Today, many different blockchain platforms exist, but their suitability for industrial use cases remains specific or, at times, limited. To achieve the best match between the requirements of Cosmic-X and the possibilities of blockchain technologies, the team conducted an extensive evaluation process. This evaluation compared both private and public blockchain platforms based on security, privacy, scalability, and interoperability.

Current-generation blockchain platforms predominantly perform well in security and scalability, yet privacy and interoperability often fall short. To achieve privacy in industrial scenarios like Cosmic-X, organizations have almost exclusively used private or consortium blockchains such as Hyperledger Fabric in the past. However, these approaches inherently involve high infrastructure costs for the operating parties, as well as centralization and limited interoperability. In contrast, public blockchains offer resilience, cost efficiency, and a degree of interoperability, though only recently have they started focusing on privacy and data protection. Blockchain protocols with confidential computing capabilities remain relatively new and untested. Nevertheless, when weighing the advantages and disadvantages of the two approaches, a privacy-focused public network emerges as the preferred solution in an industrial context.

For a public network to meet Cosmic-X’s privacy and data protection requirements, it must support the multi-tenancy paradigm. Multi-tenancy enables a single instance of a software application to serve multiple clients while ensuring logical isolation. Different clients share an underlying infrastructure, which optimizes resource use and reduces infrastructure costs. Further, it enhances efficiency in data access, management, and collaborative data sharing.

Through this evaluation, the Cosmos-based Secret Network emerged as the blockchain platform best suited for Cosmic-X. The Secret Network functions as a public blockchain specifically developed for confidential computing. By combining established encryption techniques with trusted execution environments, it provides so-called Secret Contracts. This type of smart contract establishes consensus on computation without disclosing incoming or outgoing data. Integrated access control mechanisms enable third-party access and create an auditable processing chain. Thus, the Secret Network satisfies the need for multi-tenancy capability while retaining all the benefits of a public network.

How the Secret Network Works

The Secret Network leverages Intel Software Guard Extensions (Intel SGX) to create Trusted Execution Environments (TEE) that enable Secret Contracts. These smart contracts, based on the CosmWasm framework, allow for fully private computation of data. Outside a TEE, the transaction payloads and the network’s current state are encrypted at all times. Only the data owner and an authorized third party can decrypt and view data inputs and outputs. A combination of symmetric and asymmetric encryption schemes—ECDH (x25519), HKDF-SHA256, and AES-128-SIV—achieves this end-to-end encryption. Each validator in the network must run an Intel SGX-compatible CPU and instantiate a TEE that follows the network’s rules.
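
To ground those named primitives, here is a minimal Rust sketch of the x25519 ECDH plus HKDF-SHA256 portion of such a scheme, using the x25519-dalek, hkdf, sha2, and rand_core crates. It illustrates the building blocks only; it is not the Secret Network’s actual implementation, which additionally encrypts payloads with AES-128-SIV under keys derived this way inside the TEE.

```rust
// ECDH (x25519) + HKDF-SHA256 key derivation, as used in end-to-end
// encryption schemes like the one described above.
use hkdf::Hkdf;
use sha2::Sha256;
use x25519_dalek::{EphemeralSecret, PublicKey};

fn main() {
    // Each party generates a keypair.
    let user_secret = EphemeralSecret::random_from_rng(rand_core::OsRng);
    let user_public = PublicKey::from(&user_secret);

    let node_secret = EphemeralSecret::random_from_rng(rand_core::OsRng);
    let node_public = PublicKey::from(&node_secret);

    // ECDH: the user derives a shared secret from the node's public key;
    // the node would perform the mirror-image derivation with these.
    let shared = user_secret.diffie_hellman(&node_public);
    let _ = (node_secret, user_public);

    // HKDF-SHA256 stretches the raw shared secret into a symmetric key;
    // that key would then feed an AEAD such as AES-128-SIV.
    let hk = Hkdf::<Sha256>::new(None, shared.as_bytes());
    let mut tx_key = [0u8; 32];
    hk.expand(b"transaction-encryption", &mut tx_key)
        .expect("32 bytes is a valid HKDF output length");

    println!("derived {}-byte transaction key", tx_key.len());
}
```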

When an encrypted transaction arrives in the shared mempool of the network, a validator forwards it to their TEE, where a shared secret is derived and used to decrypt the transaction. The WASMI runtime then processes the plaintext input. Finally, the validator re-encrypts the updated contract state and broadcasts it to the network through a block proposal. If over two-thirds of the current network voting power agree on the result, the network appends the proposed block to the Secret Network blockchain.

For access control, the Secret Network offers Viewing Keys and Permits. A Viewing Key acts as an encrypted password that grants a third party permanent access to data related to a specific smart contract and private key. A Permit allows a more granular approach, restricting viewing access to specific parts of data for a set period. Consequently, despite its encrypted nature, the network remains fully auditable.
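
As a purely conceptual model of how a viewing key gates reads (this is illustrative Rust, not Secret Network source code):

```rust
// Conceptual model of viewing keys: a per-address secret that gates
// read access to otherwise-encrypted contract state.
use std::collections::HashMap;

struct PrivateBalances {
    balances: HashMap<String, u128>,       // address -> (encrypted-in-reality) state
    viewing_keys: HashMap<String, String>, // address -> registered viewing key
}

impl PrivateBalances {
    /// The owner registers a viewing key for their own address.
    fn set_viewing_key(&mut self, address: &str, key: &str) {
        self.viewing_keys.insert(address.to_string(), key.to_string());
    }

    /// Anyone presenting the correct key may read that address's balance;
    /// everyone else is denied. (Permits work similarly but are scoped
    /// and time-limited rather than permanent.)
    fn query_balance(&self, address: &str, key: &str) -> Option<u128> {
        match self.viewing_keys.get(address) {
            Some(stored) if stored == key => self.balances.get(address).copied(),
            _ => None,
        }
    }
}

fn main() {
    let mut contract = PrivateBalances {
        balances: HashMap::from([("secret1alice".to_string(), 1_000u128)]),
        viewing_keys: HashMap::new(),
    };
    contract.set_viewing_key("secret1alice", "api_key_abc123");
    assert_eq!(contract.query_balance("secret1alice", "api_key_abc123"), Some(1_000));
    assert_eq!(contract.query_balance("secret1alice", "wrong-key"), None);
    println!("viewing-key gating works as modelled");
}
```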

In the next post, we’ll explore how we leverage the Secret Network to secure machine data integrity directly from its point of origin to its consumption by a Machine Learning Model.

The post Confidential Computing for Industry 4.0 appeared first on DATARELLA.


SelfKey

SingularityDAO, SelfKey and Cogito Finance Token-Holders Approve Merger to Form Singularity Finance

SingularityDAO, SelfKey, and Cogito Finance have agreed to form Singularity Finance after the communities approved the merger. SDAO and KEY token-holders voted overwhelmingly in favor of the proposal.



Dock

The Port of Bridgetown Accelerates Vessel Clearance with Dock’s Verifiable Credential Technology

Zug, Switzerland – 8 November, 2024 – Barbados Port Inc., the state-owned entity that manages the Port of Bridgetown, has integrated Dock's Verifiable Credential technology into their Maritime Single Window, to revolutionize their vessel clearance processes. This cutting-edge solution enables the Port of Bridgetown to expedite vessel clearance for both arriving and departing ships, while ensuring the integrity of credentials through tamper-proof, verifiable data. This integration enhances efficiency, security, and trust in the port’s clearance procedures.

Full article: https://www.dock.io/post/port-of-bridgetown-accelerates-vessel-clearance-with-docks-verifiable-credential-technology


Tokeny Solutions

ERC-3643: The Motherboard for Composable Tokenized Assets

Product Focus

ERC-3643: The Motherboard for Composable Tokenized Assets

This content is taken from the monthly Product Focus newsletter in November 2024.

“What token standard does your platform support?” This is a question we hear often. As a regular reader of our newsletter, you might think, “Tokeny? They’re an ERC-3643 platform.” But that’s only part of the story.

Think of ERC-3643 as a Lego motherboard. It’s the fundamental base, the piece that holds everything else together. The real magic happens when you start adding multiple smart contract blocks. What makes its composability powerful is the ability to reuse existing and proven smart contracts.

Here are a few of the most common “add-on blocks” our clients add to their tokenized assets:

Smart contracts ensure compliance: Compliance contracts make sure that only approved identities can hold tokens. They also set rules for when and how tokens can be transferred, blocking any unauthorized moves.

Smart contracts enrich asset onchain data: Asset identity contracts let you add data to assets, like ISIN, LEI, net asset value (NAV), and ESG ratings, making it easy for other platforms, such as distributors, to access this information quickly.

Smart contracts enable distribution: Distribution contracts control where the tokens can be distributed. In addition, Delivery vs. Delivery (DvD) contracts can automate buying and selling. If all requirements are met, DvD swaps happen without counterparty risk.

Smart contracts automate corporate actions: Corporate action contracts handle tasks like paying dividends or coupons, making middle and back office operations faster, smoother, and safer.
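To see how these blocks compose, below is a simplified Python sketch of the compliance-gated transfer pattern that ERC-3643 standardizes on-chain. The real standard is a suite of Solidity contracts; the class and method names here are illustrative stand-ins.

```python
# Illustrative sketch of ERC-3643-style composability: a token delegates
# transfer checks to pluggable identity and compliance "blocks".
# The real standard is Solidity; names here are simplified stand-ins.

class IdentityRegistry:
    def __init__(self) -> None:
        self.verified: set[str] = set()  # addresses with approved identities

    def is_verified(self, addr: str) -> bool:
        return addr in self.verified

class ComplianceModule:
    def can_transfer(self, sender: str, receiver: str, amount: int) -> bool:
        return amount <= 1_000_000  # e.g. a per-transfer cap rule

class Token:
    def __init__(self, registry: IdentityRegistry, modules: list[ComplianceModule]):
        self.registry = registry
        self.modules = modules
        self.balances: dict[str, int] = {}

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        # Only approved identities may hold tokens...
        if not (self.registry.is_verified(sender) and self.registry.is_verified(receiver)):
            raise PermissionError("identity not verified")
        # ...and every plugged-in compliance block must approve the move.
        if not all(m.can_transfer(sender, receiver, amount) for m in self.modules):
            raise PermissionError("transfer blocked by compliance rule")
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
```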

ERC-3643 isn’t here to compete with other token standards; it’s designed to work alongside them, offering composability and complementing their functionality. We act as a smart contract factory to ensure the smooth deployment and management of all smart contracts associated with tokens. The future of onchain finance is composable and interoperable, and we are passionate about building products to achieve that vision.

Please do not hesitate to contact us if you have any questions regarding this topic.

P.S. What is more exciting is that this week, ERC-3643 was recognized as the official standard in Project Guardian by the Monetary Authority of Singapore (MAS) for ensuring compliance in tokenized debt instruments and funds. Check out more details here.

Joachim Lebrun, Head of Blockchain

This monthly Product Focus newsletter is designed to give you insider knowledge about the development of our products. Fill out the form below to subscribe to the newsletter.

Other Product Focus Blogs

ERC-3643: The Motherboard for Composable Tokenized Assets (8 November 2024)
How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance (20 September 2024)
56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead (23 August 2024)
The Journey to Becoming the Leading Onchain Finance Operating System (19 July 2024)
Streamline On-chain Compliance: Configure and Customize Anytime (3 June 2024)
Multi-Chain Tokenization Made Simple (3 May 2024)
Introducing Leandexer: Simplifying Blockchain Data Interaction (3 April 2024)
Breaking Down Barriers: Integrated Wallets for Tokenized Securities (1 March 2024)
Tokeny’s 2024 Products: Building the Distribution Rails of the Tokenized Economy (2 February 2024)
ERC-3643 Validated As The De Facto Standard For Enterprise-Ready Tokenization (29 December 2023)

Tokenize securities with us

Our experts with decades of experience across capital markets will help you to digitize assets on the decentralized infrastructure. 

Contact us

The post ERC-3643: The Motherboard for Composable Tokenized Assets appeared first on Tokeny.


KuppingerCole

Synthetic Data for Security and Privacy

by Anne Bailey

This report provides an overview of the Synthetic Data market and a compass to help you find a solution that best meets your needs. It examines solutions that generate datasets that closely replicate the statistical properties, patterns, and characteristics of real and production data. It provides an assessment of the capabilities of these solutions to meet the needs of all organizations to generate and work with synthetic data.

ShareRing

A revolutionary way to protect personal and corporate data using Google Cloud and ShareRing.

In our daily lives we are regularly asked to provide personal details, and in many instances, we cannot secure a service or product unless we do so. This may involve a simple request to provide proof of identity, or a much more detailed one, perhaps requiring verification. This can be time-consuming, and often a fraught process, and there is always concern for the security of that data. While online privacy concerns are at an all-time high, organizations increasingly store sensitive information digitally, in centralized databases.

Global regulators continue to evolve laws and regulations, accompanied by outsized penalties for companies that fail to comply with them. The annual cost of cybercrime to Australia alone is estimated at around $29 billion: a 2023 KPMG report put the total at $29 billion per year, with direct costs to businesses accounting for a significant portion. The Australian Cyber Security Centre’s 2022-23 Cyber Threat Report highlighted a 14% increase in the average cost of cybercrime per report compared to the previous year.

Increased regulation and penalties follow the foundation set by other international privacy legislation, such as Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). In Australia, the maximum penalties for non-compliance have been significantly increased: until late 2022, the penalty for a serious privacy breach was just $2.22 million. Now, businesses can be fined the greater of:

$50 million
three times the value of any benefit obtained through the misuse of information
30 percent of the company’s adjusted turnover in the relevant period
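Expressed as arithmetic, the new penalty is simply the maximum of the three figures. A small sketch using the numbers quoted above (the statutory definitions of "benefit" and "adjusted turnover" are more nuanced than this):

```python
# Sketch of the "greater of" penalty calculation quoted above.
# Statutory definitions of benefit and adjusted turnover are more nuanced.
def max_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    return max(
        50_000_000,                # flat $50 million
        3 * benefit_obtained,      # 3x the benefit from misuse
        0.30 * adjusted_turnover,  # 30% of adjusted turnover in the period
    )

# Example: a firm with $400M adjusted turnover faces up to $120M.
print(max_penalty(benefit_obtained=10_000_000, adjusted_turnover=400_000_000))
```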

These penalties alone are a compelling reason for businesses to improve the way they protect their clients’ data. And this is just the beginning: businesses need to understand their data obligations and, where needed, implement new, compliant processes.

Read on to explore why the combination of Google Cloud Platform and ShareRing’s digital identity platform is a revolutionary approach to personal privacy and business protection.

The data security arms race

More and more, the transfer of our personal details is done digitally, and the risk of someone obtaining those details is growing exponentially. We are all aware of the growing risk of identity theft and other exploits that compromise our personal information. A centralized store of identity data is an irresistible, high-risk target for cybercriminals and threat actors.

Digital Identity as a Service

ShareRing Link is a decentralized public infrastructure (DePIN) solution that leverages Google Cloud for its core infrastructure. ShareRing’s ecosystem is architected to ensure minimal personal data is shared and, importantly, that it is not stored centrally. ShareRing Link is a business system that enables a user to share select personal information from an encrypted Vault to a business’s backend system via a zero-knowledge-proof function, such as KYC information to a financial institution, or age verification for alcohol sales to a licensed merchant.

ShareRing Me, a digital identity app available on Android, uses blockchain technology to collect and store verified identity data in an immutable, reusable, self-sovereign Vault, a “Digital Me”, on the user’s personal smart device. At all times, the user controls who they choose to share their data with from their device. ShareRing Me also gives the user the ability to back up the heavily encrypted Vault file to their personal Google Cloud, to ensure no data is lost.

Privacy and Data Segregation with Google Cloud

Google Cloud Platform is quickly becoming a leader in business and enterprise cloud computing worldwide. This is, in large part, due to their “security by design, security by default” stance, underpinning a comprehensive and industry-leading approach to data protection.

Data Segregation:

Google Cloud customer data is siloed, which reduces attack vectors. This is driven by Google’s self-imposed objectives to protect customer data and security, as well as the need to adhere to increasing regulatory compliance requirements globally.

Google Cloud uses logical isolation mechanisms inherent to virtualization technology to create isolated virtual environments for each customer, ensuring that their data and applications are not directly accessible to others.

ShareRing Self-Sovereignty: ShareRing Me uses a decentralized storage model to keep verified and encrypted personal data on the user’s personal smart device.

Data Encryption and Access Controls:

Encryption: Google Cloud has built-in encryption capabilities, such as Cloud KMS (Key Management Service), to encrypt data at rest and in transit.

ShareRing Smart Contracts: ShareRing uses smart contracts to automate and enforce access rules based on predefined conditions, ensuring that only authorized parties can access data. Data is secured across multiple nodes in the ShareRing blockchain, making it highly resistant to breach or tampering.

IAM and RBAC: Google Cloud Identity and Access Management (IAM) is used to implement granular access controls and permissions for different users and roles. In addition, Google Cloud also uses role-based access control (RBAC) and network segmentation to restrict access to customer data based on user permissions and network boundaries.

Storage: While Google Cloud can provide additional storage capacity, it can be used in conjunction with ShareRing to offer redundancy and disaster recovery of the encrypted Vault.
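For readers curious what the Cloud KMS piece looks like in code, here is a minimal sketch using the google-cloud-kms client library. It assumes valid credentials and an existing key ring and key; the resource names are placeholders.

```python
# Minimal sketch: encrypting data with Google Cloud KMS.
# Assumes the google-cloud-kms library, valid credentials, and an
# existing key ring/key; resource names below are placeholders.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path(
    "my-project", "global", "my-key-ring", "vault-backup-key"
)

plaintext = b"encrypted Vault backup bytes"
response = client.encrypt(request={"name": key_name, "plaintext": plaintext})
print(response.ciphertext)  # store alongside the Vault backup

# Decryption mirrors this: client.decrypt(request={"name": key_name,
#     "ciphertext": response.ciphertext}).plaintext
```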

Strong technology intersect

ShareRing Founder Tim Bos stated: “Google Cloud’s commitment to customer data protection is a significant factor in why we chose to partner with them. Our technologies and philosophies intersect seamlessly. The extensive and industry-leading security controls Google Cloud provides, and ShareRing’s self-sovereign identity solution, together are a much-needed evolution in privacy, in a world where personal information is as precious as, or more precious than, your other assets.”

Assurance of best practices in a combined identity solution

Both ShareRing and Google Cloud undergo regular audits and certifications to ensure compliance with various security standards, such as ISO 27001. Digital identities are also becoming increasingly regulated. ShareRing is certified against the UK’s Digital Identity and Attributes Trust Framework (DIATF) and is seeking accreditation against similar frameworks, such as the EU’s eIDAS 2 and Australia’s Digital ID framework, as they come into play.

Business inquiries: 

Ryan Bessemer, ShareRing Global

+61 403 300 442 

ryan@sharering.network 

About ShareRing

ShareRing Global stands as the only digital identity business certified with ISO 27001 Information Security Management certification, as well as a DIATF-certified provider in the UK. Our suite of identity verification technologies transforms online interactions, ensuring they are safer, faster and easier. You choose what you share with ShareRing.

 www.sharering.network 

About Google Cloud Platform

Google Cloud Platform (GCP) is a suite of cloud computing services offered by Google that provides a series of modular cloud services, including computing, data storage, data analytics, and machine learning, alongside a set of management tools.

cloud.google.com 

The post A revolutionary way to protect personal and corporate data using Google Cloud and ShareRing. appeared first on ShareRing.


IdRamp

SailPoint Account Recovery Using CLEAR Identity Verification

IdRamp has partnered with SailPoint and CLEAR to transform account recovery through advanced Identity Verification (IDV).

The post SailPoint Account Recovery Using CLEAR Identity Verification first appeared on Identity Verification Orchestration.

Thursday, 07. November 2024

KuppingerCole

Overcoming the Challenges of MFA and a Passwordless Future

Securing user identities has become a crucial focus for organizations of all sizes. The evolution from traditional passwords to Multi-Factor Authentication (MFA) and eventually to passwordless solutions introduces various challenges, such as technical obstacles, changing threat landscapes, and resource limitations.

Modern technology offers promising solutions to these authentication challenges. Advanced MFA methods, biometrics, and passwordless technologies provide enhanced security and improved user experience. However, successful implementation requires careful planning, integration with existing systems, and a focus on scalability and user adoption.

Alejandro Leal, Research Analyst at KuppingerCole, will introduce the concept of passwordless authentication, explore its benefits and challenges, and share market insights based on the latest research. He will provide valuable perspectives on the current state of authentication technologies and future trends.

Malte Kahrs, Founder and CEO of MTRIX GmbH, will address practical implementation challenges of MFA and passwordless authentication. He will discuss strategies for overcoming technical hurdles, integrating with Microsoft Entra ID, managing hardware distribution, and ensuring a smooth user experience for successful adoption.




Thales Group

Thales and Exail partner to deliver next-generation autonomous underwater vehicle mine detection capabilities for French Navy’s SLAMF Mine Countermeasures programme


Villepinte, France – November 6, 2024 – Thales and Exail have been selected by the French defence procurement agency (DGA) to deliver eight Autonomous Underwater Vehicles (AUVs), with an option for eight more. This extended version of the A18-M AUV will integrate the SAMDIS 600 sonar for the SLAMF programme (the French Armed Forces programme, supervised by the DGA, which aims at renewing the French Navy’s mine warfare capabilities with massive use of unmanned systems).


Thales’s new compact SAMDIS sonar is optimised for the detection of all naval mine threats, down to deeper waters. Featuring a unique advanced multi-view capability, it captures images from multiple angles in a single pass. Integrated with the Mi-MAP sonar data analysis software and Artificial Intelligence-driven algorithms, the SAMDIS 600 achieves exceptionally high detection and classification probabilities, delivering superior performance, which increases operational tempo and efficiency.

Based on an extended version of Exail’s A18-M AUV, with its long-range precision and stealth, and its ability to operate in challenging environments, this AUV is a sea-proven vehicle, designed to embed sonar for the detection and classification of maritime mines. Exail is responsible for developing the AUV’s fully autonomous capabilities and integrating, closely with Thales, the SAMDIS sonar technology.

Integrated within the MMCM toolbox by Thales, the A18-M AUV and the SAMDIS 600 will deliver superior performance for the benefit of the French Navy, ensuring the safety of assets and personnel, and enhancing the efficiency of mine countermeasure (MCM) missions.

“Mine Countermeasures is among the first military fields to fully transition to third-generation MCM capabilities with autonomous, drone-based systems. This is the core mission of the SLAMF programme, and Exail, a specialist in naval drones, is proud to contribute with its extended version of the A18-M AUV,” said Jérome Bendell, CEO of Exail’s Maritime Business Line, adding that “this collaboration underscores our commitment to delivering cutting-edge technology that meets the highest standards of performance and safety for modern naval defence.”

“The proliferation of sea mines and their sophistication make data collection and advanced analysis all the more necessary to counter these threats. Today, autonomous mine countermeasures systems offer greater efficiency, while protecting crews from the dangers of minefields. With the new SAMDIS 600 procured in the frame of the SLAMF, the most innovative mine countermeasure programme, Thales is proud to help operators in their decision-making for the benefit of their missions,” said Gwendoline Blandin-Roger, Vice-President Underwater Systems Business Line, Thales.

About Exail

Exail is a leading high-tech industrial company specializing in cutting-edge robotics, maritime, navigation, aerospace and photonics technologies. With a strong entrepreneurial culture, Exail delivers unrivaled performance, reliability and safety to its civil and defense clients operating in severe environments. From the deep sea to outer space, Exail expands their capabilities with a full range of robust in-house manufactured components, products, and systems.

Employing a workforce of 1850 people worldwide, the company benefits from a global footprint and conducts its business in over 80 countries.

Exail was formed by ECA Group and iXblue joining forces in 2022. It is a subsidiary of Exail Technologies, a family-owned company specialized in high-technology.

About Thales

Thales (Euronext Paris: HO) is a global technology leader serving the Defence & Security, Aerospace & Space and Cybersecurity & Digital Identity markets.

The Group develops products and solutions that help make the world safer, greener and more inclusive.

Thales invests close to €4 billion a year in Research & Development, particularly in key areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

 


KuppingerCole

Rise of the Machines - Why Machine Identity Management Has Become Essential

by Matthias Reinwarth

In today’s hybrid and complex IT environments, machine identities are multiplying at an astonishing rate. If managing human identities was once the main concern, that focus has shifted drastically. Today there are approximately 45 to 100 times more machine identities than human ones (the figures vary depending on whether you ask vendors, tech experts, or analysts), and each one of these machine identities poses a potential security risk if not properly managed. The rapid growth of cloud, DevOps, and automation has spurred this explosion in machine identities, creating a critical need for robust management strategies to ensure secure authentication, controlled access, and safe interaction across digital environments.

Machines Need Identities Too – But Not Just "Machines"

While we often talk about "machines", this term actually covers a wide range of digital entities. Beyond physical machines, today’s IT landscapes include IoT devices, OT systems, bots, applications, technical accounts, containerized services, and even cloud workloads, each of which demands a unique, securely managed identity. Machine identities enable non-human entities to authenticate, communicate, and interact autonomously, safeguarding sensitive data and critical system resources.

This landscape of digital identities is diverse, each with distinct lifecycles, requirements for secure communication, and authentication needs. For instance, IoT devices like connected cars or smart-home systems need robust authentication mechanisms to communicate safely. Industrial OT devices like SCADA sensors need secure identities for data exchange, while Kubernetes clusters and cloud instances require identities to manage interactions within dynamic, cloud-native environments. The complexity and scope of these digital interactions mean that every identity, no matter how short-lived, needs to be handled with precision and care.

Visibility, Control, and Lifecycle Management – Core Challenges

With such a wide array of machine identities, maintaining visibility and control is paramount. However, many organizations struggle to track and manage these identities effectively. As new short-lived identities proliferate in dynamic environments, they often escape detection, leading to potential vulnerabilities. When these identities are inadequately managed, they can become weak points in security, offering potential access points for cyber threats.

Another challenge is lifecycle management. Machine identities, unlike human ones, often have short lifespans and require frequent updates, renewals, or deactivations. If these lifecycles aren’t managed meticulously, organizations risk having outdated, insecure identities lingering in their systems. This unmanaged sprawl of identities can compromise not only security but also compliance with standards such as GDPR or HIPAA. The implications are clear: lifecycle management must be systematic, automated, and responsive to the high turnover typical of machine identities.

The Risks of Poorly Managed Machine Identities

When machine identities go unmanaged, the repercussions can be severe. Unauthorized access to sensitive systems, privilege escalation through compromised identities, and exposed secrets are just a few of the risks. In the absence of effective monitoring, organizations miss out on the timely detection of security threats, allowing vulnerabilities to go unnoticed. Moreover, hard-coded secrets, if left unprotected, become easy targets for exploitation, leading to potential security breaches.

As machine identities proliferate, so too does the attack surface, leaving organizations more vulnerable to unauthorized access and data leaks. This is particularly problematic in industries where compliance and security are paramount, as mismanaged identities can lead directly to regulatory violations.

Machine Identities in a Zero Trust Framework

With Zero Trust increasingly central to security strategies, machine identities play a critical role. In a Zero Trust model, no machine is assumed to be inherently trustworthy; every interaction requires authentication and verification. This approach is essential in today’s multi-cloud and hybrid IT landscapes, where machines frequently interact across potentially insecure networks. With technologies like mutual TLS (mTLS), machine identities enable secure communication between devices, ensuring that only authenticated entities can access critical resources.

In a Zero Trust framework, machine identities not only secure communication but also enable ongoing verification of interactions. This principle is foundational to establishing and maintaining trust, both for human and machine identities, within an organization’s digital ecosystem.
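As a concrete illustration of the mTLS pattern, the sketch below shows a Python server that rejects any client unable to present a certificate issued by the workload CA. File paths and the port are placeholders for your own PKI material.

```python
# Minimal mTLS server sketch: both sides must present certificates,
# so only workloads holding a CA-issued identity can connect.
# File paths are placeholders for your own PKI material.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.pem", keyfile="server.key")
context.load_verify_locations(cafile="workload-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # handshake verifies the client cert
        print("authenticated peer:", conn.getpeercert().get("subject"))
        conn.close()
```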

Secure Secrets Management – An Essential Pillar

Effective management of machine identities demands secure handling of “secrets” - API keys, SSH keys, certificates, and other credentials essential for authenticating machine communication. These secrets need to be stored securely, rotated regularly, and managed centrally to reduce human error and prevent misuse. Automated secrets management allows organizations to scale this process to handle the vast numbers of identities typical in a modern IT environment, ensuring that each identity’s lifecycle is managed securely from creation to deactivation.

Integrating secrets management into a comprehensive identity governance framework provides additional layers of security. This approach not only minimizes security gaps but also enforces consistent security practices across both human and machine identities.
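A rough sketch of the rotation loop such automation performs is shown below. The store and its issuance calls are hypothetical placeholders, not any specific product's API:

```python
# Sketch of automated secret rotation: issue a new credential before
# the old one expires, then revoke the old one after a grace period.
# `store`, `issue_credential`, and `schedule_revocation` are hypothetical.
from datetime import datetime, timedelta, timezone

ROTATE_BEFORE = timedelta(days=7)   # rotate a week before expiry
GRACE_PERIOD = timedelta(hours=24)  # keep the old secret briefly valid

def rotate_if_needed(store, identity: str) -> None:
    record = store.get(identity)  # e.g. {"secret": ..., "expires_at": ...}
    now = datetime.now(timezone.utc)
    if record["expires_at"] - now > ROTATE_BEFORE:
        return  # still fresh, nothing to do
    new_secret = store.issue_credential(identity)  # e.g. new API key or cert
    store.activate(identity, new_secret)           # consumers pick it up
    store.schedule_revocation(identity, record["secret"], now + GRACE_PERIOD)
```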

Key Takeaways: The Essentials of Machine Identity Management

Machine Identities as a Foundation for IT Security
Machine identities are indispensable for secure interactions and communications in modern IT environments.

Scaling with Growth
The exponential increase in machine identities demands robust, automated management to keep pace with this growth.

Lifecycle Management for Security
Systematic management of identity lifecycles mitigates the risks posed by outdated or uncontrolled identities.

Secrets Management to Close Security Gaps
Proper secrets management is vital for protecting machine identities and preventing security breaches.

Integration with Identity Governance
Machine identities should be part of a unified identity governance framework to ensure consistent security policies.

Accountability Through Ownership
Clear assignment of responsibilities is crucial for maintaining the security and traceability of machine identities.

A More Precise Term for Identity Diversity
The term "machine identities" may need refinement to better capture the diverse range of non-human identities in today’s digital environments.

In short, machine identity management is not only critical but complex, requiring organizations to adopt structured, automated, and comprehensive approaches. In a world where machine interactions outnumber human ones, secure identity management is not optional - it’s essential.


SailPoint Atlas - Unified Identity Security Platform

by Nitish Deshpande

SailPoint Atlas is a unified identity security platform that focuses on identity security by combining modern technologies such as AI and machine learning. A technical overview of the SailPoint Atlas is included in this KuppingerCole Executive View report.

auth0

Your B2B SaaS App Just Got Better

Machine-to-Machine Access for Organizations reaches General Availability (GA), unlocking SaaS APIs for developers

Northern Block

A Summary of Internet Identity Workshop #39

(Images used in banner courtesy of Ankur Banerjee, @ankurb)

 

Introduction

Below are my personal highlights from the Internet Identity Workshop #39, held from October 29–31, 2024, at the Computer History Museum in Mountain View, California. The Internet Identity Workshop (IIW) is a one-of-a-kind, unconference-style event that gathers professionals across the digital identity space to openly discuss, debate, and innovate. IIW39 set a record for attendance, with 178 sessions, giving us the opportunity not only to stay up-to-date but also to contribute through sponsorship and active participation, reinforcing our commitment to this evolving field.

Images courtesy of Internet ID Workshop (@idworkshop)

Our team left inspired by the range of perspectives and in-depth conversations and are excited to share some of the key takeaways relevant to digital credential ecosystems. To organize the insights, I’ve grouped the most impactful sessions into three themes: trust establishment, adoption, and tech stack updates. These themes helped me categorize sessions that stood out and offered valuable perspectives for our work in digital credentials, wallets, and trust establishment infrastructure.


#1 – Trust Establishment

This IIW featured many discussions around governance, trust registries and trust establishment.

Progressive Trust in Issuer Registries with LinkedClaims

This session explored the concept of “progressive trust” in issuer registries, where entities can initially join a trust registry with minimal requirements and gradually build their credibility over time by adding claims. LinkedClaims was proposed as a potential solution to enable this approach, allowing ecosystem participants to add claims to a trust registry incrementally, thereby increasing their level of assurance as they demonstrate further compliance or meet additional standards. By setting low initial barriers for inclusion, this model supports a more accessible and open ecosystem, where entities can start with a basic level of trust and enhance it progressively. This approach provides an inclusive framework for building transparency and encouraging a steady flow of verifiable claims, enabling credentials to gain broader acceptance across different ecosystems as entities solidify their trustworthiness.
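A toy sketch of the idea: a registry entry accumulates verified claims, and its assurance level rises accordingly. The data model and thresholds below are invented for illustration; LinkedClaims itself is a linked-data pattern, not this code.

```python
# Toy model of progressive trust in an issuer registry: entries start
# with minimal assurance and climb as verified claims accumulate.
# The schema and thresholds are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    issuer_did: str
    claims: list[dict] = field(default_factory=list)  # verified attestations

    def add_claim(self, claim_type: str, verified_by: str) -> None:
        self.claims.append({"type": claim_type, "verified_by": verified_by})

    @property
    def assurance_level(self) -> str:
        kinds = {c["type"] for c in self.claims}
        if {"legal_entity_verified", "accreditation_audit"} <= kinds:
            return "high"
        if "legal_entity_verified" in kinds:
            return "medium"
        return "baseline"  # listed with minimal requirements

entry = RegistryEntry("did:example:university")
entry.add_claim("legal_entity_verified", verified_by="did:example:regulator")
print(entry.assurance_level)  # "medium"
```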

 

Well-Attended Discussion on Bridging Trust: DIDs, DNS, and X.509

Another session that ultimately brings trust establishment into the discussion was focused on creating layered assurance by bridging decentralized identifiers (DIDs) with established infrastructures like DNS and X.509. This hybrid approach allows any entity—not just credential issuers—to build more assurance by combining DIDs with established, trusted systems. This setup is particularly valuable for organizations with a strong digital presence, as it lets them leverage existing DNS or certificate frameworks to increase the assurance of their identity or credentials. We’ve already implemented this concept with DNS bridging in our IETF draft on High Assurance DIDs with DNS, demonstrating how entities can use this approach to create dependable, transparent interactions. As one of the co-chairs of the High Assurance VID Task Force (HAVID), I’m actively engaged in advancing this approach, proving that layered trust realms can support higher assurance in decentralized ecosystems.

A diagram provided by Dr. André Kudra, shown in the IIW session
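The DNS-bridging idea can be pictured as a simple cross-check: the organization publishes its DID in a TXT record under its domain, and verifiers confirm the two point at each other. A sketch with dnspython follows; the `_did.` record naming is in the spirit of the IETF draft mentioned above, but treat the exact convention as an assumption.

```python
# Sketch of a DID-to-DNS cross-check: a verifier confirms that the
# domain referenced by a DID publishes that same DID in a TXT record.
# Record naming here is an assumption, not the normative IETF draft text.
import dns.resolver  # pip install dnspython

def did_anchored_in_dns(did: str, domain: str) -> bool:
    try:
        answers = dns.resolver.resolve(f"_did.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    published = {b"".join(rdata.strings).decode() for rdata in answers}
    return did in published

# Example: True only if _did.example.com's TXT records contain the DID.
print(did_anchored_in_dns("did:web:example.com", "example.com"))
```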

European Union Digital Identity Wallet (EUDI Wallet) Relying Party Authentication

The topic of relying party authentication for the EUDI Wallet sparked enough discussion to span two sessions. The first session on day 2 raised several open questions around the best approach for authenticating relying parties, leading to a follow-up session on day 3 to further unpack the issues.

One of the key points in discussing EUDI Wallet’s architecture was the requirement for relying parties to provide certain data about themselves to the wallet and, by extension, to the holder. This requirement, stemming from the EU’s eIDAS regulation, allows the holder to have insight into what data a relying party wishes to access and how they intend to use specific credentials. This transparency is essential for enabling informed decisions by the holder and safeguarding data privacy.

Various technical options were explored for implementing this authentication, including traditional X.509 solutions, OpenID Federation, and SD-JWTs (selective disclosure JSON Web Tokens). Each approach has unique strengths and challenges, with OpenID Federation emerging as a flexible option for interoperability. However, concerns around the complexity of the OpenID Federation specification led to discussions on simplifying or segmenting it to make it more accessible, particularly for the EUDI Wallet context.

Northern Block has been actively investing in implementing OpenID Federation across our solutions, aligning with the standard’s potential for fostering trust and interoperability in digital credentialing. Yesterday, on November 6, 2024, we presented an update at a Findynet-hosted event, sharing insights on our progress. A recording of the session is available on the event meeting page for those interested in learning more.

Additionally, the sessions considered how OpenID Federation might integrate with the European Blockchain Services Infrastructure (EBSI) and other European trust establishment technologies, potentially serving as an abstraction layer to connect multiple verification methods. While OpenID Federation shows promise for trust establishment in the European context, the sessions underscored that simplifying the spec could be key to overcoming current barriers. There’s clear interest in OpenID Federation’s role in the European market, and as this work evolves, it could provide a streamlined path for cross-border compatibility and trust in digital credentials.


#2 – Adoption

IIW39 offered a strong forum to gauge the state of adoption in digital credentialing and examine what’s required to drive it forward.

 

“Has Our SSI Ecosystem Become Morally Bankrupt?”

In one of the very many thoughtful sessions at IIW39, Christopher Allen raised a challenging question: has the self-sovereign identity (SSI) ecosystem strayed from its founding principles? His blog on the topic served as inspiration for the session. Allen questioned whether current implementations are compromising core SSI values—such as existence, control, access, transparency, and protection—that were foundational to the concept of self-sovereign identity. Increasingly, we’re seeing the industry willingly delegate key functions to platform providers, often replicating centralized or federated models that limit user control and freedom.

As examples, Allen pointed to the rise of mobile driver’s licenses (mDLs) and DID implementations such as did:web. These approaches may gain traction through their ease of adoption and existing infrastructure but risk overlooking some key principles as mentioned above. This trend raises concerns about whether these solutions are being designed in a way that prioritizes control for platform providers rather than the individuals using them. Allen’s critique highlights how some modern implementations of SSI risk sacrificing these core principles for the sake of convenience or widespread adoption.

From my perspective, these principles remain the goal for myself, our company, and many collaborators in the industry. However, achieving true self-sovereignty in a scalable way involves navigating significant structural and funding challenges. 

Much like the internet was seeded by the U.S. government through projects like ARPANET, where initial government funding was critical to establishing its foundations, digital trust infrastructure requires substantial investment to reach critical mass. This foundational funding enabled others to build value on the internet through commercially driven models that continue to reshape society as a whole. Today, governments and large organizations—particularly those with a public benefit as their core mission—are often the only entities capable of making this level of investment, viewing digital trust infrastructure as a form of public infrastructure that justifies their funding.

But with funding comes influence. Governments and large entities exercise control over their constituents through rules, laws, and regulations—frameworks that don’t always align seamlessly with the digital world’s principles of openness and user autonomy. This creates a tension between the need for investment to build digital public infrastructure and the inherent incentive models these large entities operate under, where control and oversight are often prioritized. This represents a larger struggle in balancing innovation with institutional authority, especially as digital identity and trust infrastructure continue to develop.

In my view, balancing SSI’s principles with these real-world constraints isn’t an all-or-nothing endeavor. Each implementation should strive to maximize user control, privacy, and transparency, even if some trade-offs are necessary. The investments we’re seeing are undeniably driving amazing advancements, and it’s a matter of taking the best parts and continuously improving upon them. This isn’t a zero-to-one leap but rather a journey of chipping away at constraints, making incremental progress toward a digital world that aligns more closely with self-sovereign ideals.

This session was an important reminder for me—and for all of us in this space—not to lose sight of the vision and principles that brought us here. Even as we navigate complex environments, we must stay grounded in the values that underpin SSI, ensuring they remain central as we move forward, one step at a time.

 

Public Sector Momentum and Cross-Ecosystem Acceptance

There continues to be significant momentum in the public sector around digital credentialing, with the U.S., Canada, Europe, and other regions like Bhutan each advancing in their own unique ways. In the U.S., states are increasingly adopting mobile driver’s licenses (mDLs), with many offering digital driver’s licenses through platforms like Apple and Google Wallets, while others provide their own state-specific wallets. Similarly, Canadian provinces are moving forward with their own digital wallets, and the European Union is working toward nation-state-approved wallets as part of a cohesive digital identity strategy. Each region’s approach reflects key differences and nuances in the technical stacks and governance models across these public sector ecosystems. Bhutan’s launch of its National Digital Identity (NDI) project exemplifies how even smaller nations are adopting digital credentials, contributing to a global trend in verifiable credentials across public sector initiatives.

While the public sector is a key driver, there are notable differences in approaches across these regions. Organizations like the Global Acceptance Network (GAN) are essential in bridging these varied approaches, fostering cross-ecosystem compatibility through multiple sessions and discussions around trust establishment at IIW39. For readers interested in how GAN supports the adoption of verifiable credentials across sectors and regions, we recommend our recent podcast episode on GAN’s ecosystem, which delves into its development and vision.

For anyone seeking a lay of the land in public sector credentialing, Northern Block has a strong perspective from our work in both North America and Europe. Feel free to reach out to us for further insights into how digital credentialing is evolving in the public sector across these regions.


#3 – Technical Updates

With the rapid evolution of standards and interoperability frameworks, IIW39 highlighted some of the latest tech stack advancements that are shaping digital credential ecosystems.

 

Digital Credential Query Language (DCQL)

The Digital Credential Query Language (DCQL) proposes to offer a streamlined solution to the complexity of existing credential presentation models, presenting a simplified, structured approach to querying credentials. Developed as part of the upcoming Implementer’s Draft for OpenID4VP, DCQL is designed as an alternative to Presentation Exchange (PE), which, though flexible, has become complex and challenging to implement. With dependencies like JSONPath, regular expressions, and extensive schema filters, PE can be cumbersome and potentially insecure, especially in browser-based environments.

DCQL aims to address these issues by introducing a more straightforward, JSON-based syntax that is largely credential format-agnostic, allowing for simpler and faster implementation. By reducing optional elements and removing complex dependencies, DCQL lowers the technical barriers for organizations adopting digital credentials, making credentialing solutions easier to implement and scale. However, as the adoption of DCQL grows, it is expected to coexist with PE, creating a phase where both standards are in use. This dual adoption could lead to interoperability challenges, as some organizations might choose to implement only one standard. DCQL’s simplified approach thus highlights the need for careful handling of interoperability across digital identity ecosystems, especially where both PE and DCQL are expected to operate.

Although initially specific to OpenID4VP, DCQL’s adaptability has the potential for broader use, supporting a more consistent and accessible querying standard as digital identity implementations grow across ecosystems.
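For a feel of the simplified syntax, here is an illustrative DCQL-style query expressed as a Python dict. Field names follow the draft as discussed at IIW and may have evolved since; treat them as indicative rather than normative.

```python
# Illustrative DCQL-style query: request two claims from an SD-JWT VC,
# identified by credential type. Field names follow the draft discussed
# at IIW and may have evolved; indicative only, not normative.
dcql_query = {
    "credentials": [
        {
            "id": "pid",                    # verifier-chosen handle
            "format": "dc+sd-jwt",          # credential format identifier
            "meta": {"vct_values": ["urn:eu.europa.ec.eudi:pid:1"]},
            "claims": [
                {"path": ["given_name"]},   # plain JSON paths, no JSONPath
                {"path": ["age_equal_or_over", "18"]},
            ],
        }
    ]
}
```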

 

Google’s Zero-Knowledge Proof (ZKP) for Mobile Credentials

Google introduced an advanced, high-performance ZKP for mobile environments, which represents a significant breakthrough in privacy-preserving credentials. With this implementation, users can present specific claims without revealing additional data, aligning with SSI principles. The optimization of ZKPs for sub-second performance opens new doors for real-world use cases in identity verification. As this technology becomes more accessible, it could drive widespread adoption across industries that require privacy-centric solutions for sensitive interactions.

 

Revocation and Status Mechanisms Comparison

Managing credential status and revocation is essential, particularly for high-volume and regulatory-sensitive use cases. The session on revocation mechanisms provided a detailed comparative analysis of various approaches, evaluating them on key criteria such as scalability, privacy, security, and deployment readiness. These comparisons offer digital identity architects a clearer framework for selecting revocation methods that best align with their operational needs and compliance requirements. As digital credential ecosystems grow, a flexible approach to revocation—one that adapts to different regulatory environments and use cases—will be increasingly critical. For more details, you can view the session slides here.
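One widely discussed family from such comparisons is the status list, where a credential carries an index into a compressed bitstring published by its issuer. The verifier-side check might look like the sketch below; encoding details vary between the Bitstring Status List and Token Status List variants, so this is indicative only.

```python
# Sketch of a status-list revocation check: the credential points at a
# published, compressed bitstring; bit N says whether credential N is
# revoked. Encoding details vary by spec variant; indicative only.
import base64
import gzip

def is_revoked(encoded_list: str, index: int) -> bool:
    # Issuer publishes the list as base64url-encoded, gzip-compressed bytes.
    padded = encoded_list + "=" * (-len(encoded_list) % 4)
    bits = gzip.decompress(base64.urlsafe_b64decode(padded))
    byte, offset = divmod(index, 8)
    return bool(bits[byte] >> (7 - offset) & 1)  # most-significant bit first

# Issuer side (for the example): 16 credentials, number 3 revoked.
raw = bytearray(2)
raw[0] |= 1 << (7 - 3)
encoded = base64.urlsafe_b64encode(gzip.compress(bytes(raw))).decode().rstrip("=")
print(is_revoked(encoded, 3), is_revoked(encoded, 4))  # True False
```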

 

Conclusion

IIW39 consistently provides a lens into the current adoption cycle and maturity of digital credential and wallet ecosystems. As digital identity continues to grow, events like IIW serve as critical forums to assess the evolving landscape of digital credentials, standards, and wallet functionalities. For organizations navigating this space, these insights highlight the importance of transparent governance backing credentials and ecosystems, practical adoption strategies, and streamlined technical solutions that simplify yet secure digital interactions.

I hope this summary was useful to readers. As always, feel free to reach out to me directly at mathieu@northernblock.io or connect with me on LinkedIn if you’d like to discuss these topics further. We’ll be attending the next Internet Identity Workshop, IIW40 (IIWXL), in Spring 2025 from April 8 to April 10, and we urge anyone who finds this discussion interesting to consider joining us there.

–end–

The post A Summary of Internet Identity Workshop #39 appeared first on Northern Block | Self Sovereign Identity Solution Provider.



Dock

Reusable KYC: What it is, benefits and impact on ID companies

The current landscape of Know Your Customer (KYC) processes is marked by inefficiencies that create friction, drive up costs, and frustrate users. Customers are required to repeat KYC procedures every time they engage with a new service, even if the same KYC provider is behind the scenes. 

This leads to high drop-off rates, as customers lose patience with slow, redundant processes. 

Reusable KYC offers a transformative approach by allowing users to complete KYC once and reuse their verified identity across multiple services, significantly enhancing the user experience and operational efficiency for businesses.

In this article we’ll go through what Reusable KYC is, its benefits and how it can be enabled by centralized and decentralized technologies.

Let's dive in!

Full article: https://www.dock.io/post/reusable-kyc


Ocean Protocol

DF114 Completes and DF115 Launches

Predictoor DF114 rewards available. DF115 runs Nov 7 — Nov 14th, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 114 (DF114) has completed.

DF115 is live today, Nov 7. It concludes on November 14th. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF114 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.

To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.

To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF115

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF114 Completes and DF115 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What is Biometric Authentication? Methods & Security Features

What is biometric authentication? Explore methods, effectiveness, and security features to see if it’s right for your organization.

 

As our world becomes increasingly digital, there is a growing need for more secure identity verification methods to replace the faulty password security that is still widely used. 

 

Biometric authentication has emerged as a strong method for safeguarding network and facility access, with a wide range of possible applications from healthcare to hospitality and nearly all industries in between. 

 

Offering enhanced security while providing users with a more streamlined log-in experience, it’s likely that many of us already use biometric authentication in our day-to-day lives, even if we weren’t aware of it. 

 

Throughout this article, we will explore the different biometric technology methods available, their security features, and help you decide if it’s a suitable option for your organization.

Wednesday, 06. November 2024

UbiSecure

Tips for designing your Sign-In

In today’s digital landscape, offering only one sign-in method, such as a username and password, is no longer sufficient to meet the diverse needs and expectations of users. As technology evolves and global markets expand, it’s imperative for websites and apps to provide multiple, secure, and convenient login options. By doing so, businesses can enhance user experience, improve accessibility, and strengthen their competitive position. In this blog post, we’ll explore the numerous benefits of offering alternative login methods, from wider device support and increased security to enhanced user satisfaction and operational efficiency.

Providing alternative login methods has many benefits for users and online service providers:

Wider user reach
Different regions, countries and user groups all have their most preferred way to login to consumer or business services. For example, BankID in Scandinavia, Finnish Trust Network in Finland, LINE in Japan, WeChat in China etc. Not offering the common services for the region where you users are coming from can limit service adoption and sales. Access to download various smartphone authenticator applications may also be limited to certain app store regions. Protection from lost sales and lost business
The inability to log in, for whatever reason – technical or non-technical, creates user frustration, delays and often times lost business. For an end-user, it can be easier to log in to a competitor’s service than to work out why login to your service is failing. When passwords are forgotten or text messages never arrive, having alternative options on offer increases the chance of the user signing in without any further assistance required. In the same way online shopping carts are abandoned when payment services are too difficult to use, users who can’t log in never even get a shopping cart in the first place. Operating cost management
Some third-party authentication services can add significant operating costs as the number of login events increases. Costs associated with an identity provider service may be easier to negotiate if the there are alternatives already available to use. Where there are multiple authentication options, these can be presented in an order that encourages the selection of the most cost-effective option. Technical redundancy
Imagine that the authenticator app or email client that you use continually crashes for some reason due to an unexpected mobile operating system update. Unable to click on a notification or get a generated one-time password, you are locked out of your account. Sometimes login systems are down for maintenance, upgrades, network issues or because of unforeseen difficulties. In these cases, instead of contacting support, choosing the button to sign in using an alternative provider is faster and easier. This lets the user solve their own login problems without any support burden and related costs. Wider end-user device support
Providing only “Sign in with Apple” or “Sign in with Google” makes things difficult if the user ever leaves the respective Apple or Google ecosystem, even if your app or service is targeted certain platform users only. Some organizations even have policies that forbid their employees from using non-corporate login systems for business use. Users could be shut out from accessing their personal information or historical records. Supporting multiple sign in methods enables users to securely access their data if they change devices or operating systems. Dealing with life’s little surprises
Consider the situation where SMS one-time password is the only MFA option, but the SMS never arrives, due to network failure, being out of network range, having a flat battery, a broken screen, lost or misplaced phone or service subscription halted due to an unpaid phone bill. It’s nice to have another way to sign in in these cases. Improved accessibility
For users with disabilities, the ability to use the authenticator or identity provider of their own choice can allow them to access online services without assistance. Different authenticators suit different users, some don’t work at all for parts of the user community. End-user device compatibility
Access to download various smartphone authenticator applications may be limited to certain app stores, be region locked, or be incompatible with user devices in the field running older operating systems.

Helping to avoid unwanted surveillance
Repeatedly logging in via the same identity provider can inadvertently allow that provider to track your behaviour closely. By using different providers, or choosing authentication methods that are not inherently traceable by third parties, users are empowered to choose freely in order to protect their own privacy.

Avoiding identity provider lock-in
If there is a data breach or other security event at an upstream identity provider, immediately disabling it is the fastest way to avoid collateral attacks. Disabling a provider is easy when there are many other alternatives still available to use. Service continuity readiness requires planned, ready-to-go contingencies. Identity providers can also cease operating at short notice for other commercial or legal reasons. Do not keep all your eggs in one basket: diversifying the range of sign-in options mitigates the risks of individual solutions.

Meeting compliance requirements
Depending on the nature and jurisdiction of the application, where sensitive, private and/or personal information is processed, compliance with relevant security, privacy and usability legislation is mandatory. Different types of transactions may require different authentication techniques mandated in legislation, and this legislation can change over time. Being able to add and change authentication methods easily makes staying compliant easier. A good example is the European Digital Identity Framework, which will see the roll-out of digital identity wallets for European citizens in the coming years. Public sector services and certain industries will be required to allow sign-in using these new wallets.

Ready for the future
Technology and legislation are changing at a rapid pace, and authentication protocols, products and techniques adapt to these changes. Being ready for new trends and changes in user expectations around sign-in techniques requires that applications can easily add, remove or change the sign-in methods offered. Adding newly emerging biometric authentication, authentication methods based on quantum-resistant cryptography, or emerging AI-supported authentication tools should be a matter of reconfiguration rather than application redesign.

Designing and planning for multiple sign-in methods with best practices

Fortunately, many commercial software applications today are designed to support externalised user authentication and authorization. These applications can be configured to connect to an Identity Provider Broker, hosted either in the cloud or locally on-premises. This Identity Provider Broker, or IdP Broker for short, handles the secure communication with the various identity services and authentication methods. It presents the list of available login options and contains all of the complex logic needed to integrate with those methods and services.

When planning the design of a new online service, the product manager, architect or product owner should insist that user authentication is performed outside of the application itself. This is sometimes called single sign-on (SSO) support, federated identity support or externalised identity, or referenced using the names of related protocols such as OAuth, OAuth2, OpenID Connect or SAML. It accelerates product development and simplifies the logic of the online service.

Even older, legacy applications and services can be modified to replace built-in authentication options with externalised authentication with minor application changes.
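
To make this concrete, here is a minimal sketch of what externalised authentication can look like in code, using Python with Flask and Authlib to delegate login to an OpenID Connect provider or IdP Broker. The broker URL and client credentials are hypothetical placeholders, and details vary by product; treat this as an illustration of the pattern rather than a recipe for any particular vendor’s service.

# Minimal sketch: delegating login to an external OpenID Connect IdP Broker.
# The broker URL and client credentials are hypothetical placeholders.
from flask import Flask, session, url_for
from authlib.integrations.flask_client import OAuth

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

oauth = OAuth(app)
oauth.register(
    name="broker",
    server_metadata_url="https://idp-broker.example.com/.well-known/openid-configuration",
    client_id="my-client-id",
    client_secret="my-client-secret",
    client_kwargs={"scope": "openid profile email"},
)

@app.route("/login")
def login():
    # Redirect to the broker, which presents the available sign-in options.
    return oauth.broker.authorize_redirect(url_for("callback", _external=True))

@app.route("/callback")
def callback():
    # Exchange the authorization code for tokens and read the verified identity.
    token = oauth.broker.authorize_access_token()
    session["user"] = token.get("userinfo")
    return "Signed in"

Because the application only speaks OpenID Connect to the broker, sign-in methods can be added, reordered or removed at the broker without touching application code.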

Supporting multiple sign-in methods is a first step

Once authentication has been externalised and multiple sign-in methods are supported, this opens the door to other powerful functions that can enhance user experiences:

Support for teams and groups
An external identity provider can also provide information to an application about an individual’s membership of an organisation, be it a company, team, club or family. This enables convenient sharing of information and responsibilities within an online service.

Cross-organisation collaboration and information sharing
Sharing is not limited to your own organisation – information can be gathered from or distributed to users at other organisations, such as partners, suppliers, customers and sub-contractors. An application that is integrated with an externalised identity management system can get access to these rich connections and permissions without building it all into its own service.

Performing tasks on behalf of someone else
Often, the person using an online service is doing something on another person’s behalf. It may be a consultant helping a client get things done, an adult doing something for their elderly parents, or a care-giver assisting a person in need. This should not be done by sharing sign-in credentials, but by authorising the other party to perform these tasks.
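
The post does not prescribe a particular protocol for this kind of delegation, but one standard way to express it is OAuth 2.0 Token Exchange (RFC 8693), sketched below in Python. The endpoint URL, client credentials and tokens are all placeholders.

# Hedged sketch: OAuth 2.0 Token Exchange (RFC 8693) lets one party obtain a
# token to act on another's behalf, instead of sharing sign-in credentials.
# All values below are hypothetical placeholders.
import requests

response = requests.post(
    "https://idp-broker.example.com/oauth2/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # Token representing the person the task is performed for:
        "subject_token": "<subject-access-token>",
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # Token representing the consultant or care-giver doing the task:
        "actor_token": "<actor-access-token>",
        "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
    },
    auth=("my-client-id", "my-client-secret"),
)
delegated_token = response.json()["access_token"]  # records the actor in an "act" claim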
Performing tasks on behalf of another organisation

In business, outsourcing certain functions to another organisation is commonplace. These partners need access to the client firm’s information and the tools provided by online services. This can be achieved through externalised authorization.

Do you need help adding more authentication and authorisation options to your online service?

Ubisecure offer software and services to allow your customers to sign in using the authentication method they choose, from a range of options that match your security requirements. Different ways to sign in can be added or removed as requirements and markets change. Support for teams, groups and on-behalf-of use cases can be added to new and existing services. Contact Ubisecure today for more information and a no-obligation demonstration.

The post Tips for designing your Sign-In appeared first on Ubisecure Digital Identity Management.


Spruce Systems

Meet the SpruceID Team: Dani Johnson

Dani, Head of Operations at SpruceID, brings extensive experience in managing a wide range of responsibilities, from finance to people operations.
Name: Dani Johnson
Team: Operations
Based in: Seattle, Washington

About Dani

I’ve worked in business operations throughout my career, and SpruceID is my second software startup. I wanted to work on the most challenging and innovative technology I could find. When I found SpruceID it felt like a perfect fit: a great home for my existing skills where I could have a broad portfolio of responsibilities, as well as an exciting set of fresh challenges.

Can you tell us about your role at SpruceID?

As the Head of Operations, I manage accounting and finance, people operations, compliance, and the rhythm of business. I work with our outstanding legal and accounting teams and oversee financial audits and SOC 2 audits. I also work closely with our CEO and lead special projects of all shapes and sizes.

What do you find most rewarding about your role?

My role is always evolving to cover new ground, so I always have something new to learn. At SpruceID I have access to so many expert minds, and it is incredibly rewarding to be able to soak up new subject matter expertise on a regular basis.

What are some of the most important qualities for someone in your role to have, in your opinion?

Integrity, drive, and an intensely meticulous and organized nature. I was one of those little kids that always colored inside the lines. My plastic dinosaurs were in order on the shelf.

What are you currently learning, or what do you hope to learn?

I am currently working a lot on international initiatives, so I am learning about corporate establishment, banking, contracting, and employment in some jurisdictions outside the US. Fascinating and sprawling.

What has been the most memorable moment for you at SpruceID so far?

Some of my most treasured SpruceID memories are of experiences we’ve had as a team at our team gatherings. Scrambling to get the wifi working in our ad hoc offices in Kyoto, eating together in one of the shacks on Copacabana Beach, singing along to Irish traditional folk music in Dublin.

What is some advice that you’d give to someone in your role who is early in their career?

Be worthy of the trust your organization has in you.

How do you define success in your role, and how do you measure it?

When the organizational operations are running smoothly it frees up the rest of the team to innovate and explore, so in some ways I measure success in my role by how little everyone else needs to think about it. Like great service at a restaurant, you don’t really notice it, you just notice that you have what you need. That is my goal.

Fun Facts


What do you enjoy doing in your free time?: Traveling near and far, cooking, eating, and taking long audiobook walks.

What is your favorite coding language (and why?): Rust, of course!

If you could be any tree, what tree would you be and why?: Any kind that involves animal visitors. Pinyon pine for bear visitors, or one of those argan trees the goats climb. A big fir tree for owl and squirrel friends would be fine.

Interested in joining our team? Check out our open roles and apply online!

Apply to Join Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Trinsic Podcast: Future of ID

David Kelts - From Idemia to Decipher Identity and the Evolution of Mobile IDs

In this episode of The Future of Identity Podcast, I’m joined by David Kelts, a leader in digital identity and mobile ID initiatives, with a career that spans significant contributions across multiple companies and initiatives worldwide. David's insights shed light on the journey of mobile driver’s licenses (mDLs), the evolution of identity verification, and his current role at Decipher Identity, where he’s tackling adoption challenges and working with businesses to expand use cases for digital identity.

We explore:

- David's early work at Idemia, including pioneering efforts in connecting driver’s licenses to online identity proofing.
- The origin and adoption challenges of mobile driver’s licenses (mDLs) and why adoption has lagged behind expectations.
- Privacy concerns surrounding digital IDs and the misconception of "phone home" tracking in mobile identity, along with how privacy regulations are influencing this space.
- The role of standards organizations and government agencies, like AAMVA and TSA, in fostering privacy and security in digital credentials.
- The future vision for digital identity, including the potential for digital-native identity credentials, cross-border use cases, and the value of user choice in secure digital wallets.

David also shares stories from working directly with states like Utah and California on mDL projects and reflects on what’s needed for broader adoption. This episode is a deep dive into the evolving landscape of digital identity and is perfect for anyone interested in the future of authentication, privacy, and user-centric identity solutions.

You can learn more about Decipher Identity at decipher.id.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


IDnow

Exploring the customer onboarding differences in global gambling markets.

Are you ready to play? We explain why gambling operators should always do more than the bare minimum when onboarding players.

The gambling industry has never been more lucrative.  

Bolstered in large part by the proliferation of easily accessible online gambling platforms, the industry was worth over $85 billion last year. By 2029, that figure is expected to leap to $133 billion. There are multiple reasons for the industry’s projected rapid growth, including improved access to high-speed internet and reliable payment systems. Another is the opening of numerous currently unregulated markets around the world, especially in Latin America. 

While playing at online casinos, in online lotteries or on online sports betting platforms may feel like a similar experience for consumers across countries, there are certain differences in onboarding dictated by specific regional regulatory environments. 

Of course, for the customer, this all happens in the background, so it is rarely a consideration. Gambling operators, however, especially those keen to expand into new territories, should be aware of every nuance of the customer onboarding process: every regulatory environment, every document or data point required for compliance, and every product functionality that could make the customer experience more inclusive, intuitive and secure. 

In the ‘Worth the risk: Why gambling operators should always do more than the bare minimum when onboarding players’ ebook available below, we explore the different customer onboarding journeys from some of the world’s most popular gambling markets, including the United Kingdom, Brazil, Ontario and many more.

How gambling operators should really be onboarding players. Download to discover:

The most common fraud attacks that gambling operators were subjected to in 2023.
The size of some of the most popular global gambling markets.
The steps required to legally onboard players in nine different countries.

Download now

While onboarding requirements differ across regions, at IDnow, we believe it is crucial to protect vulnerable individuals and verify that customers are who they claim to be, regardless of the market.

Roger Redfearn-Tyrzyk, VP of Global Gaming.
Know Your Player?

Although the gambling industry has never offered more opportunity, it has also never been under greater regulatory scrutiny, or at greater risk of fraud attacks or bonus abuse. 

Other challenges facing operators include the ever-increasing cost of player acquisition, and the need to comply with AML, data privacy and responsible gambling requirements. Not to mention that, like most industries nowadays, there is a user expectation for seamless, secure and frictionless 24/7 online experiences. The optimal time to address these challenges is at the beginning, during the identity verification and customer onboarding process.

The role of the gambling regulator.

The regulator’s role is to protect players. They do this by devising and regularly revising regulations and issuing fines for operators that do not comply.

As new jurisdictions open and regulators implement tighter control mechanisms, the number of gambling fines are only set to increase. In fact, in 2023, the global industry saw a record number of fines ($402 million), with UK operators subjected to the most fines, followed by Australia, Ontario, the Netherlands and the US. 

Creating regulations that work for the player, the operator and the national market is no easy feat. For example, if taxes are too high or player limitations too strict, this can push players to the black market. When that happens, governments do not benefit from additional tax and players are not protected. Striking a balance is essential.

Do more than the bare minimum when onboarding players.

At IDnow, we value our trusted relationships with regulatory bodies from around the world. It is these connections and this multi-jurisdictional expertise that allow us to empower operators to confidently navigate onboarding challenges, wherever they are based. 

“To enhance security and minimize risk, we recommend going beyond the basic identity checks by integrating additional screening measures early in the customer journey. Implementing these checks early, ideally before withdrawal, provides better protection and reduces the risk of fraud, safeguarding both customers and businesses from unnecessary exposure to financial harm,” added Roger.

Our layered, holistic approach to identity verification enables operators to add additional layers of assurance by offering a flexible solution tailored to risk appetite and regulatory needs. These layers include a range of verification checks, from data checks and financial risk assessments to biometric and video verification. 

With the ability to scale up or down in line with a country’s specific regulatory needs, IDnow ensures operators maintain robust protection against fraud and other risks, while delivering a seamless and compliant customer journey.

By

Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn


Caribou Digital

Meeting the target and missing the point: Putting society at the center of digital public…

Meeting the target and missing the point: Putting society at the center of digital public infrastructure

Written by Jessica Osborn — CEO, Emrys Schoemaker —Senior Director of Advisory & Policy, and Niamh Barry — Senior Director of Measurement & Impact, all at Caribou Digital.

Alongside this year’s World Bank and IMF Annual Meetings, and following the insightful Co-Develop DPI Summit in Cairo earlier in the month, Caribou Digital participated in several conversations on the social and economic impact of digital public infrastructure (DPI).

Together, these events demonstrated a welcome shift in the conversation toward the importance of putting people at the center of DPI’s design and implementation in order to increase adoption and use. Yet, this people-centric approach seems more nascent in discussions of DPI’s measurement and impact, which still often centers on institutional efficiency and access. While these are important goals (and often the initial impetus for DPI implementation), by omitting nuanced consideration of people-level impact we risk — at best — missing an opportunity for DPI to drive more meaningful development outcomes and — at worst — DPI causing people harm. Digital transformation affects the lived experiences of citizens in very real ways, and by bringing into view goals on inclusion, agency, and empowerment, we uncover a whole range of metrics that must be considered to ensure that the impact on people’s lives is positive. The need to build an efficiency-based investment case for DPI should not trump the need to build the human impact case.

DPI’s outcome problem: A “shared means to many ends”

That people are underrepresented in the conversation on DPI measurement is symptomatic of the fact that, while there is growing consensus around the “whole of society” approach to DPI implementation, this is still nascent when it comes to measuring DPI’s impact. DPI is an emergent system that is deeply interconnected, and as such it requires a systems-level theory of change and measurement approach.

The description of DPI as “a shared means to many ends” highlights the numerous possibilities of use and, therefore, the numerous potential outcomes for different actors within a given system — government, civil society, private sector, businesses, households, and individuals. These are connected actors; thus, impact and information flows are also multidirectional.

As a DPI community, we have many reasonable hypotheses (see Caribou’s illustrative examples below) but not a coherent narrative on the multitude of outcomes that DPI — in its diverse forms — could enable. A shared understanding of DPI’s potential outcomes for different system actors could unlock multi-stakeholder collaboration on the “right measures” and mitigate the risk of misalignment and diminished effectiveness. Investing time in defining outcomes is crucial, ensuring they reflect the voices and needs of all stakeholders. Only then can metrics that genuinely serve these outcomes be defined.

Caribou’s illustrative examples of DPI outcomes

Metrics align intention and value
“When a measure becomes a target, it ceases to be a good measure.”
Goodhart’s Law

Metrics are useful ways of measuring outcomes — but only when they are aligned with a broader understanding of potential impact. Fundamentally, outcomes are expressions of what is valued; they reflect intention and galvanize collective action around what gets measured. A focus on misaligned outcomes can have lasting and challenging real-world effects; the financial inclusion sector’s fixation on account access, exacerbated by global measurement tools like Findex, is a case in point. The onus is on us as a development community to ensure an inclusive and “whole of society” approach to defining and measuring the changes that can result from DPI to drive genuinely inclusive and meaningful impact.

Who gets to define these outcomes is also a critical question involving power dynamics that influence whose voices and needs are prioritized. DPI is necessarily a state-driven initiative, but it implies a rearticulation of at least a triad of relationships in the social contract: between the state and individuals, between individuals and the market, and between the state and the market. There are power dynamics and deeply held (and sometimes contested) values underpinning all three relationships, pointing to the complexity and necessity of involving all stakeholders in finding common ground and defining outcomes that matter.

A moment in time for DPI measurement

C. V. Madhukar has said that we are at a unique moment in time in digital transformation. This unique moment offers an opportunity for the development community to align key stakeholders on a common set of DPI outcomes and the right metrics to measure those outcomes. These metrics could: 1) provide a clearer understanding of the benefits DPI delivers to different groups; 2) reveal the risks of DPI, so that products and services can course-correct, and 3) enable comparisons between approaches that could help define “Good DPI”, akin to the influential efforts to mobilize consensus around “Good ID.”

This clarity could guide funding decisions and channel resources toward solutions with the greatest potential for impact. Defining such a measurement framework requires a systems-focused theory of change that incorporates individuals, businesses, civil society actors, and governments, and that is underpinned by a critical synthesis of the existing evidence (in this regard, DIAL and Co-Develop’s forthcoming DPI Evidence Compendium is an excellent first step).

Digital development measurement practices can show the way

While the multifaceted nature of DPI presents a measurement challenge, we are not starting from scratch. As a digital development community, we have learned a great deal from measuring digital initiatives, and these form a valuable knowledge base from which to start. Some key learnings:

Prioritize outcomes over adoption metrics. Measurement systems reflect values and intentions, and we must prioritize outcomes tracking alongside — easily and digitally obtained — adoption tracking to ensure that decision-making extends beyond access and use. Funders and implementers should measure the change they want to see in order to drive inclusive impact. Based on their extensive experience supporting DPI implementation, Public Digital’s call to measure value from the perspective of service users is an important reminder to focus on outcomes. Building on this, we could also draw on Amartya Sen’s influential “human capabilities” approach, as well as C. V. Madhukar’s emphasis on societal capabilities, to consider outcomes from a multi-stakeholder perspective. To make a compelling case for DPI, it must be clear that DPI makes a real difference in the public’s lives and that there must be a swift response to any harm — something that matters to politicians, policymakers, planners, implementers, and, most importantly, people.

Adopt a systems-focused, complexity-aware theory of change. DPI warrants a systems-led, complexity-aware theory of change and measurement framework informed through system mapping, evidence synthesis, and deep and wide stakeholder consultation. As DPI is both ever-dynamic and advancing rapidly, theories of change must also evolve continuously. This approach should consider both opportunities and risks for various actors engaging with DPI. Without identifying all sides, we risk a one-sided view of impact, potentially overlooking significant risks to different stakeholders. Developing a nuanced and adaptive theory of change can support DPI to be responsive, equitable, and impactful for all involved.

Embed iterative measurement within tech systems. Data on metrics can often be captured in real time using digital solutions themselves, enabling feedback loops that drive continuous improvement. Such cost-efficient embedded measurement and adaptive management approaches can ensure that DPI initiatives focus on delivering public value beyond deployment and adoption.

Utilize a multi-method approach. Iterative measurement (above) may need to be triangulated with other instruments in order to capture all required data. Findex-type survey data may be required to gather some data points. Additionally, literature measurement can act as a “purpose navigator,” ensuring that deployments deliver tangible public benefit.

DPI impact at a societal scale requires collective action

By building consensus on the outcomes that matter and metrics that measure those outcomes — particularly as they reflect the lived experiences of those impacted — DPI can support inclusive growth, empower individuals, and deliver societal-scale transformation.

The knowledge, tools, and momentum to make a real difference exist, but impact requires collective action and a shared vision.

Please reach out to Jess (jess@cariboudigital.net), Emrys (emrys@cariboudigital.net), or Niamh (niamh@cariboudigital.net) if you would like to discuss our thinking further.

Meeting the target and missing the point: Putting society at the center of digital public… was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

Ontology Weekly Update: October 28th — November 4th, 2024

Welcome to our latest weekly roundup! This week has been packed with exciting news, community growth, and major engagements. Read on to catch up on what’s been happening across the Ontology Network.

Ontology 🌐

Partnership with Soonverse Announced!

We’re thrilled to announce our new partnership with Soonverse! This collaboration marks a significant step forward for Ontology as we continue to expand our capabilities and build stronger, decentralized solutions for our users. Stay tuned for more details on how this partnership will unlock new opportunities in the Web3 space!

Humpty’s Workshop at Hack Seasons in Bangkok

Heading to Bangkok? Don’t miss Humpty’s workshop at Mpost’s Hack Seasons! This workshop will cover vital aspects of decentralized identity and blockchain development. Join us for hands-on learning and insights directly from the heart of the Web3 community.

Mozo AMA: A Record-Breaking Session!

Our AMA with Mozo was a great success, drawing 15.7k participants who tuned in for an in-depth discussion on blockchain innovation and Ontology’s future. A huge thank you to everyone who joined and made it an engaging session!

Community 🌍

Privacy Hour and Weekly Discussions

This week, our regular Privacy Hour was held, where community members gathered to discuss the latest trends and developments in Web3 privacy practices. These sessions continue to foster invaluable conversations that highlight the importance of data security and decentralized identity.

We also had a vibrant weekly community discussion, connecting members and sharing ideas. The enthusiasm and active participation showcased by our community reinforce why we prioritize these engagements.

Stay Connected 📱

Your involvement fuels the growth and innovation of Ontology. Make sure to stay connected with us on all our platforms to participate, stay informed, and contribute to the decentralized future we’re building together.

Ontology website / ONTO website / OWallet (GitHub) / Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog / Telegram / Announcement / Telegram English / GitHub / Discord

Ontology Weekly Update: October 28th — November 4th, 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

Introduction to the Okta Integration Network

Whether or not you use Okta’s products, you may find yourself working on software whose target audience includes Okta customers. Adding your application to the Okta Integration Network creates a smoother and less error-prone user management experience for these shared customers, and can unlock the potential of additional features as well.

For a high-level perspective on the benefits of building to the open standards supported by the OIN, which also lets you easily support any other identity provider’s integration marketplace, here’s Director of Identity Standards Aaron Parecki:

And to learn about what the integration submission process looks like on a more technical level, the OIN 101 Walkthrough can help:

Check out Okta’s SaaS Security page and integrator help hub for more resources.

Follow OktaDev on Twitter and subscribe to our YouTube channel to learn about additional integrator resources as soon as they’re available. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!

Tuesday, 05. November 2024

1Kosmos BlockID

Digital Identity Spotlight: Thailand

The nation of Thailand has a ready response for governments around the world seeking insights on implementing digital identity at scale: Phuket.

In recent years, the Thai island paradise of Phuket—long known for its pristine beaches, stunning waterfalls, and vibrant nightlife—has transformed itself from a resort town to a smart city. Its thriving technology sector and “smart, safe, sustainable” approach to governance have become a prime model and critical test market for the nation’s expansive Thailand 4.0 strategy. This 20-year economic development plan is designed to turn this Southeast-Asian country of more than 70 million people into a high-tech, high-income powerhouse, supported and enabled by digital identity.

To that end, Phuket has become a pilot region for Thailand’s new digital identification and verification infrastructure—and for good reason. The city’s tourism sector provides an ideal proving ground for using digital identity to verify visa applications, travel bookings, and access to local services in a seamless, all-digital manner. Since launching 16 months ago, the test has been a trial by fire. But it’s one that Phuket’s tech-savvy population is well-positioned to navigate and help refine.

In Phuket, tourists, expats, and locals use a mobile app called ThaID (as in Thai-ID) to register for banking and healthcare services. But the system also has other purposes. To crack down on counterfeit ID cards that have long plagued Phuket’s bustling nightlife venues, this facial biometrics-based mobile digital ID is now required to gain entry to the city’s clubs and bars. Yet, for all their utility, these and other early applications are just a glimpse of what digital identity has come to mean for this nation.

Phuket, Let’s Go: When Digital Identity Is More Than Just Tech

Thailand’s ambitious digital identity initiative is about more than just financial inclusion, ensuring access to services, and securing against mounting cyber threats. In recent months, it has become emblematic of a nation set on reasserting its identity as a hub of digital innovation—and reigniting an economy lagging its regional neighbors.

In recent years, Thailand’s growth has stagnated. Even as per capita income in China, Singapore, and Malaysia has soared, Thailand has struggled to escape what the World Bank’s 2024 Development Report describes as a “middle-income trap.” A vital component of this predicament is an average annual growth rate hovering around 3% for nearly 30 years, compared to China’s average of 8.86% and Singapore’s 6.18%.

Roughly 531 miles north of Phuket, Thailand’s capital city of Bangkok is crafting a far more promising narrative. Modern skyscrapers, luxury hotels, high-end shopping centers, and world-class restaurants abound. Importantly, strides made by Thailand’s robust technology sector increasingly mirror Phuket’s. Over the past year, investment in artificial intelligence, data analytics, cloud computing, and cybersecurity, for instance, has contributed to the sector’s 12.8% growth rate. In October, Bloomberg reported that Nvidia Corp. plans to invest heavily in Thailand, joining Alphabet Inc. and Microsoft in building data centers and component manufacturing plants here.

Thailand 4.0 is designed to build on previous economic development plans, which focused on agriculture (Thailand 1.0), light industry (2.0), and heavy industry (3.0). Expanding and leveraging Thailand’s thriving tech sector to help fuel growth and opportunity across the rest of the economy means digital identity isn’t just a nice-to-have—it’s an imperative.

Why Digital Transformation Requires Trusted Identity Proofing

Put simply, digital identity is the electronic representation of an individual’s credentials used for identity verification and proofing. Think of it as your passport, driver’s license, and bank card rolled into one secure, digitized framework verified by cross-referencing government-issued, physical world credentials. For individuals, using physical credentials to make purchases, manage finances, or receive entitlements in person is a relatively simple proposition. Doing the same in digital channels through authentication based on usernames and passwords is another thing entirely—one that has failed miserably.

Thanks to never-ending phishing attacks and corporate data breaches, the login credentials and personal identity files of billions of individuals worldwide have been compromised and made available to cybercriminals and threat actors on the Dark Web. In 2024 alone, nearly 3 billion people had their personal information stolen during a cyberattack targeting data broker National Public Data (NPD). This includes what some believe to be the Social Security Number for every US citizen. This past summer, a tranche of more than 10 billion login credentials was discovered in an online hacker forum.

Cyber thieves and other threat actors leverage this information to defraud individuals, businesses, and governments. They can siphon funds from bank accounts, apply for loans or credit cards, access government benefits, and more. They can also infiltrate corporate and government networks to breach data they can monetize downstream—sometimes with implications for critical infrastructure and national security. According to TransUnion, the number of successful data breaches jumped 15% last year. Worldwide, the price tag for such attacks is projected to top $9.5 trillion annually.

Unfortunately, that projection may prove naive. Today, new forms of AI increasingly enable threat actors of all stripes to enhance the effectiveness and scale of their operations. This is material in Southeast Asia, where dense populations and significant socioeconomic stratification make countries in the region prime targets for AI-enabled attacks. It also doesn’t help that Thailand has been home to what the FBI calls the world’s largest cybercrime network. But a growing number of governments here and around the world view digital identity as critical to mitigating these threats.

ThaID & Beyond: How Digital Identity Is Taking Shape in Thailand

The ability to facilitate fast, secure interactions and transactions is foundational to every digital economy, including Thailand’s. However, it requires a universally accepted form of identity proofing that protects privacy and prevents personal identity data from being stolen and exploited by others.

Compared to Belgium’s itsme, Singapore’s SingPass, or even India’s Aadhaar system, Thailand’s digital identity initiative is still in its early stages. But it’s catching up. The country’s focus on mobile-based identity verification, a key element of digital identity, is supported by its extensive 5G mobile broadband network—among the first deployed in Southeast Asia. The initiative also benefits from a tech-savvy citizenry. Fifty percent of the population is expected to have a mobile broadband subscription by 2025, while overall Internet penetration exceeds 88%.

Rather than developing a government-run digital identity system, however, Thai officials have opted to forge public-private partnerships within a digital identity ecosystem linking service and identity providers (IDPs). So far, some of the most prominent forms of digital identity include the following:

ThaID
Launched by the Department of Provincial Administration (DOPA) in 2023, the ThaID mobile app simplifies access to services requiring identity confirmation in both the public and private sectors. For example, ThaID facilitates access to government services such as public health care, vehicle registration, and online tax payment without requiring additional data entry.

NDID: The National Digital Identity Platform
This blockchain-based infrastructure is designed primarily to address digital Know Your Customer (KYC) mandates within banking and financial services. It’s intended to “enhance digital security to facilitate online transactions and enable wider access to banking and lending” via the user’s preferred mobile banking app.

MNID: Mobile Network ID
Operated by participating telcos, the MNID system serves their mobile customers, facilitating identity verification and authentication.

These and other biometrics-based applications are designed to secure online transactions and prevent fraud. And they’re buoyed by regional collaborations like the ASEAN Digital Economy Framework, which seeks to standardize cross-border digital identity recognition. But there are hurdles. Unlike digital identity initiatives in Singapore and Estonia, where privacy concerns have been addressed through robust governance frameworks, Thailand’s initiative faces public trust issues and the fear of data misuse. Enhanced regulation and a surprising financial incentive may change that.

Tang Rat: Stimulus and a Step Toward Self-Sovereign Identity

One of the critical benefits of Thailand’s digital identity initiatives is convenience. Once registered, citizens don’t need to enter additional information when accessing services or manage multiple usernames and passwords—and biometric authentication adds an extra layer of security.

But a series of public sector data breaches, like the one that compromised the personal identity information of more than 55 million Thais earlier this year, threatens to erode trust in e-government initiatives like Thailand 4.0. Downloads of ThaID and a new digital wallet within a super app called Tang Rat—which require submission of sensitive personal information such as the back of the national ID card and a unique set of codes for making digital transactions—have been tepid. Only 1 in 5 Internet users in Thailand have downloaded either of these apps. There’s no telling how many have uninstalled them.

Stepped-up regulatory mandates on data breaches and cross-border data sharing, and steep fines for non-compliance, are meant to stem concerns and incentivize stronger protections. Moreover, a significant benefit of digital wallets and their blockchain-based architectures is the use of globally unique identifiers that give users a cryptographically verifiable, decentralized digital identity. This approach sets the stage for self-sovereign identity (SSI), where authenticating users no longer requires personal data to be stored centrally on bank, government, or retail servers where it can be hacked. Instead, users can control what personal information they share, how it’s used, and for how long.
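
To make the idea of a cryptographically verifiable, globally unique identifier concrete, here is a minimal DID document in the style of the W3C Decentralized Identifiers specification, written as a Python dict; the did:example identifier and key value are illustrative placeholders, not values from any Thai system.

# Minimal DID document in the style of the W3C DID Core specification.
# The did:example method and key value are illustrative placeholders.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mkf5rGMoatrSj1f4CyvuHBeXJELe9RPdzo2PKGNCKVtZxP",
    }],
    # Whoever controls the matching private key can authenticate as this DID,
    # without any central server holding their personal data.
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}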

Then there’s that longer-term objective of Thailand 4.0. To accelerate adoption and help juice the economy, the Thai government is spending US$14 billion to preload digital wallets with US$300 in spending money for each person who downloads one.

What Should Come Next

This kind of incentive aside, I applaud Thailand’s digital identity initiative and the country’s embrace of digital wallets. In my view, digital identity’s success is predicated on distributed technologies and the architectural advantages they offer. This is especially crucial given the country’s ecosystem approach to digital identity. If deployed well, these technologies augur a day when someone applying for a car loan can choose which, if any, personal information to share, instead of opening their entire financial lives to a lender or dealer financing department.

It also means they could one day share third-party trust scores that allow them to demonstrate creditworthiness without revealing any personal information at all. Also promising: Thailand’s adoption of liveness tests during authentication of certain services.

But I do have one rather urgent piece of advice. To be most effective, the Thai government and its ecosystem partners would be wise to implement NIST-, FIDO2-, and ISO-type biometrics-based standards for its digital identity infrastructure and any associated liveness tests. Only then will they be able to defeat virtually any attempt at identity spoofing. And yes, if they were to seek my advice about the ideal setting for testing these technologies, my immediate response would be Phuket.

Interested in digital identity-based authentication but aren’t sure where to start? Learn more about 1Kosmos BlockID, the only NIST-, FIDO2-, and iBeta biometrics-certified digital identity platform—and schedule a free demo today.

The post Digital Identity Spotlight: Thailand appeared first on 1Kosmos.


IDnow

EUDI Wallets: Balancing privacy with usability.

Our Senior Architect, Sebastian Elfors recently participated in a panel discussion on the challenges of balancing privacy with usability when developing the EUDI Wallet. Here he shares his thoughts and concerns.

As the co-author of the ETSI TR 119 476 ‘Analysis of selective disclosure and zero-knowledge proofs applied to Electronic Attestation of Attributes,’ I was recently invited to attend the ‘How far should privacy go? Privacy versus Usability’ panel discussion during October’s EU Digital Identity Wallets Forum in Spielfeld’s Digital Hub in the heart of Berlin. 

At the panel I was joined by panelists Steffen Schwalm, Principal Consultant at MSG, Mirko Mollik, Identity Architect at SPRIN-D, and Philippe Rixhon, Chair of the Management Board at Valunode OU; the hour-long panel was moderated by Michal Tabor, partner at Obserwatorium.biz. 

Throughout the lively and robust discussion, the panel debated and exchanged opinions on various matters, but there was one topic on which the panelists reached complete consensus early on: that user privacy will be essential when EUDI Wallets are rolled out across Europe in the coming years.

The panel also agreed that the eIDAS 2.0 regulation contains the relevant articles and recitals that cater for mandatory selective disclosure and unlinkability when the EUDI Wallets are used to present electronic attributes. Simply put, the concept of selective disclosure allows a user to present a minimum of personal information to a verifier. The classic example is to prove that you are of legal drinking age when entering a bar, without revealing any more personal information than just your age. The principle of verifier unlinkability means that one or more verifiers cannot collude to determine if the selectively disclosed attributes describe the same identity subject. 

Assessing what has come before.

Earlier this year, I was appointed to co-author the European Telecommunications Standards Institute (ETSI) report ETSI TR 119 476, which provides a comprehensive overview of existing cryptographic schemes for selective disclosure, unlinkability and zero-knowledge proofs (ZKPs). It also gives recommendations on data formats and protocols that are suitable for selective disclosure with the EUDI Wallet. 

 Similarly, the Architecture Reference and Framework (ARF) specifies the ISO mobile driving license (mDL) MSO and IETF SD-JWT VC as credential formats for selective disclosure, which are the same formats as proposed in the ETSI report. The ISO mDL MSO is a selective disclosure standard based on ‘salted hashes’ of attributes, which are CBOR encoded and signed by the issuer. Likewise, the SD-JWT also contains salted hashes of attributes, which are JSON encoded and signed by the issuer. As such, I believe the ETSI report and ARF are aligned with respect to credential formats. 
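
To make the “salted hashes” idea concrete, here is a minimal Python sketch of selective disclosure in the spirit of SD-JWT. It is deliberately simplified: the JWT signature envelope, base64url encoding details and holder binding are omitted, and the claim values are invented examples. Only salted hash digests are embedded in the signed credential; the holder later reveals just the (salt, claim) pairs they choose, and the verifier recomputes the digests.

# Simplified selective-disclosure sketch in the spirit of SD-JWT.
# Not the actual standard encoding: real SD-JWTs hash base64url-encoded
# JSON arrays and carry the digests inside a signed JWT.
import hashlib, json, secrets

def digest(salt: str, name: str, value) -> str:
    # Hash of the salted claim; only this digest goes into the signed payload.
    return hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()

# Issuer: generate a salt per claim and sign a payload containing only the
# digests (the issuer's signature itself is omitted from this sketch).
claims = {"given_name": "Ada", "birthdate": "1990-01-01", "nationality": "FI"}
disclosures = {name: (secrets.token_hex(16), value) for name, value in claims.items()}
signed_payload = {"_sd": sorted(digest(salt, name, value)
                                for name, (salt, value) in disclosures.items())}

# Holder: reveal only the birthdate, e.g. to prove legal drinking age.
revealed = {"birthdate": disclosures["birthdate"]}

# Verifier: recompute each digest and check it appears in the signed payload.
for name, (salt, value) in revealed.items():
    assert digest(salt, name, value) in signed_payload["_sd"]

Because undisclosed claims appear only as salted digests, a verifier learns nothing about them; the digests themselves are static, however, which is why batch issuance is needed for unlinkability, as discussed next.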

As the ISO mDL MSO and SD-JWT are digitally signed with cryptographic algorithms approved by SOG-IS (Senior Officials Group Information Systems Security), they can therefore be used by the EU public sector. The drawback is that ISO mDL MSOs and SD-JWTs must be issued batchwise to the EUDI Wallets to cater for verifier unlinkability, which adds an operational cost for the Qualified Trust Service Providers (QTSPs) and the PID Providers. 

There is, however, also an eIDAS 2.0 article that allows EU Member States to implement more innovative ZKPs on a voluntary basis. By using a ZKP scheme, the user can prove that a given statement is true, while not providing any additional information apart from the fact that the statement is true. 

 The more advanced ZKP schemes, such as BBS+ (named after its creators Boneh, Boyen, and Shacham) and zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge), have the advantages of providing full unlinkability and dynamic predicates, without the additional cost of issuing batches of credentials. There are academic research projects, such as the Cinderella project, which have implemented zk-SNARKs to “pick out” certain elements of a classic X.509 certificate or an ICAO eMRTD (electronic Machine Readable Travel Document according to the International Civil Aviation Organization standard, such as electronic passports), and shared those attributes with a verifier. This approach is also getting some interest from ISO/IEC, which may apply it on a standard for selective disclosure of the ISO mDL attributes. 

Certainly, these ZKP schemes need to be standardized before being considered for the EUDI Wallet. The IETF (Internet Engineering Task Force) CFRG (Crypto Forum Research Group) and ISO/IEC (PWI 24843 and CD 27565) are in the process of standardizing BBS+, which may result in BBS+ being referenced by a future version of the ARF.

The challenges of building an EUDI Wallet ecosystem. 

Privacy is clearly a complex topic when it comes to the ZKP protocols and related standards that need to be considered for the EUDI Wallet. When it comes to building a complete EUDI Wallet ecosystem, there are even further complexities: 

The eIDAS2 Relying Parties will be registered for specific use cases.
The QTSPs can issue Qualified Electronic Attestations of Attributes ((Q)EAAs) with embedded disclosure policies, which restrict how the EUDI Wallets can share the (Q)EAAs with Relying Parties.
The EUDI Wallets will implement access control rights, according to a new CEN TC224 draft standard.
Last but not least, the users must give their consent to share the (Q)EAAs or PIDs with Relying Parties.

All of this creates a significant user experience challenge for the EUDI Wallet ecosystem, which will require it to be designed and tested thoroughly. 

Of course, an important topic when it comes to the EUDI Wallet is transactions. The panelists exchanged ideas on how QTSPs will be able to invoice the Relying Parties for (Q)EAA transactions, in case the QTSP is not notified about how the EUDI Wallet is sharing the (Q)EAAs. In other words, how can a QTSP invoice the Relying Parties without knowing who they are? 

There are a few potential solutions to this problem. The first is to count and share each EUDI Wallet Provider’s aggregated and anonymized statistics with the QTSPs. A second option could be to insert payment terms in the (Q)EAAs with embedded disclosure policies, which the Relying Parties must accept before processing the (Q)EAAs. A third option could be to extend the OpenID for Verifiable Presentations (OID4VP) with parameters to check for agreements between the QTSPs and Relying Parties. The OID4VP protocol will be used by the EUDI Wallets for presenting PIDs and (Q)EAAs to the Relying Parties, so it could make sense to extend this protocol to make an a-priori “check” with the Relying Party that there is an agreement in place, prior to sharing the (Q)EAAs. 
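
As an illustration of that third option, an OID4VP authorization request could carry an extra parameter referencing the QTSP–Relying Party agreement. In the sketch below, response_type, client_id, nonce and presentation_definition are standard OID4VP request fields, while qtsp_agreement_ref is purely hypothetical, invented here only to show where such an extension could sit.

# Hypothetical OID4VP authorization request extended with an agreement check.
# "qtsp_agreement_ref" is NOT part of OID4VP; it is an invented placeholder.
authorization_request = {
    "response_type": "vp_token",
    "client_id": "https://relying-party.example.com",
    "nonce": "n-0S6_WzA2Mj",
    "presentation_definition": {
        "id": "age-over-18",
        "input_descriptors": [
            {"id": "pid", "constraints": {"fields": [{"path": ["$.age_over_18"]}]}}
        ],
    },
    # Hypothetical extension: the wallet could resolve this reference and
    # confirm a QTSP/Relying Party agreement exists before sharing the (Q)EAA.
    "qtsp_agreement_ref": "https://qtsp.example.com/agreements/rp-12345",
}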

Given the complexity of the EUDI Wallet ZKP protocols and the challenges of creating an ecosystem of QTSPs and Relying Parties that is also a viable business model, we agreed that discussions need to be ongoing. These topics should preferably be considered by the policy makers in the EU Commission DG-CNECT. The EUDI Large Scale Pilots, which are currently underway, should also be encouraged to test the complex scenarios described above. 

Considering how important the EUDI Wallet will be to identity management in Europe, it is fundamental for the entire eIDAS 2.0 community to resolve these issues before the EUDI Wallets are rolled out at scale in Europe in the coming years.

By

Sebastian Elfors
Senior Architect
Connect with Sebastian on LinkedIn


Indicio

Biometrics and Verifiable Credential pioneer Indicio launches “Bring Your Own Biometrics” Verifiable Credential solution to solve biometric fraud

Indicio’s market-changing solution gives people control over their biometric data, removes the need for centralized storage, and solves the challenge of generative-AI identity fraud, all while delivering the simplicity, privacy, and security that everyone needs to feel confident in biometric authentication. No need to abandon biometric systems, BYOB-VC can be added as a layer for rapid digital transformation. 

Today, Indicio announces the launch of its groundbreaking solution to the risks and challenges of biometric authentication: BYOB-VC, Bring Your Own Biometrics using Verifiable Credentials.

BYOB-VC is a simple, easy-to-implement way for enterprises or governments to authenticate portable biometric data without having to store it.

Simply give people their biometrics in a Verifiable Credential (as part of an identity assurance process) and require them to present the biometric template in the VC (held in a digital wallet on their mobile device) when they do a liveness check. Verification software compares the live biometric with the authenticated biometric in the credential.

This radically simplifies biometric authentication — and provides a simple, intuitive, and powerful way to bypass the risk of AI-generated deepfakes.

BYOB-VC was developed by Indicio for pre-authorized travel and seamless border crossing and is in use in Digital Travel Credential solutions. Now, it is available in an easy-to-implement form for any organization reliant on biometrics for authentication and access management.

Global surveys show public are alarmed over biometric security and privacy

BYOB-VC addresses deep public concerns over biometric authentication. The recent International Air Transport Association (IATA) Global Passenger Survey 2024 found that a majority of airline passengers are worried about biometric data breaches and how their biometric data is being used.

A global consumer survey by mobile payment platform Jumio found that 72 percent of respondents are concerned on a daily basis that they may lose money or sensitive data to a deepfake.

And a 2024 survey by GetApp found that only 5 percent of consumers believed that their biometric data was secure.

Giving people control of their biometric data and the ability to consent to share that data, as BYOB-VC does, is a critical step to reassuring the public and governments over the safety of biometric processes. It meets the demands of the European Union’s Data Protection Board, which stipulates that “individuals should have maximum control over their own biometric data.”

By combining a liveness check with the cryptographic, tamper-proof verifiability of Verifiable Credential technology, BYOB-VC is the most powerful multi-factor authentication available for biometrics — and it can be enhanced to meet the most critical security requirements by easily adding other Verifiable Credentials — such as a government-issued ID — to the authentication process.

Benefits

Portable trust

You can prove the source of the Verifiable Credential and that the biometric data in the credential hasn’t been altered or faked. You can prove that the credential is bound to the person presenting it.

Bypasses generative AI deepfakes

Biometric authentication is a quick, two-step process: the person presenting themselves for a biometric scan also presents their authenticated biometric template in a Verifiable Credential from their digital wallet. Verification software compares the two and they have to match. There are multiple layers of biometrics, cryptography, and other security binding the credential to the wallet and the wallet to the device and the device to the person.

Faster, flexible, and simpler biometric management

No centralized biometric storage. BYOB-VC removes the complexity around biometric systems. There’s no need to worry about them going offline or protecting against data breaches — because there’s no data to access! Verification software is simple and mobile, allowing you to take advantage of portable, trustable biometric authentication.

Makes data privacy compliance much easier 

By enabling people to store their own biometric data you’ve not only solved the security risk of centralized storage, you’ve solved the compliance challenge of centralized storage and data minimization.

Addresses critical public concerns over biometric data

With generative AI being used in ever more elaborate scams, BYOB-VC provides robust reassurance to people: not only that their data can't be stolen, but that it can't be used in ways they aren't aware of or haven't approved. The IATA Global Passenger Survey found that 39 percent of people would reconsider using biometrics if they were reassured about their privacy.

Why the future of biometric authentication needs to be decentralized

Biometrics have emerged as a powerful, frictionless way to authenticate identity. They are better than username and password-based authentication because they can’t be forgotten, don’t need to be reset, and — in the case of an iris — are unique to an individual.

But as biometrics have proliferated as a method to access systems, the upside of their uniqueness has revealed a precipitous downside. Biometrics need to be stored in a database so that the verifying party can compare the live scan of the person presenting themselves with a stored copy of their biometrics. If they match, the person is authenticated.

This centralized storage means they are at risk of being stolen in a data breach, and when this happens, people cannot reset their fingerprints, faces, or irises.

And if this wasn’t a big enough existential problem, the rapid rise of generative AI has made it astonishingly easy to convincingly fake biometric data.

Entrust Cybersecurity reported a 3000% increase in deepfake attempts between 2022 and 2023, while Deloitte’s Center for Financial Services is predicting AI-generated “fraud losses to reach US$40 billion in the United States by 2027, from US$12.3 billion in 2023, a compound annual growth rate of 32%.”

So far, typical responses range from “be more vigilant about security” to “don’t post detailed pictures of yourself online” to “we need an AI solution to detect AI fakes.”

So simple, so fast, so cost effective

BYOB-VC is a simple way around both wishful thinking and an AI arms race, as it leverages the revolution in decentralized digital identity. Here’s how it works.

When a person has their biometric data first scanned as part of identity assurance, the data is digitally signed and issued to them in a Verifiable Credential that they hold on their mobile device.

Verifiable Credentials have three powerful features:

1. The source of the credential can be proved using cryptography.

2. If someone tries to manipulate the data in a credential, they break the credential.

3. The credential is cryptographically bound to the person and their device.

By rendering the biometric template taken during identity assurance in the form of a Verifiable Credential, any organization can authenticate it using simple verifier software. The source of the credential is authenticated, the integrity of the template data is authenticated, and finally, the template data is compared with the live biometric scan, all in one seamless process.
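The issuance side can be sketched the same way: sign the biometric template into a credential bound to the holder. This is a minimal illustration assuming Ed25519 and a JSON payload; real Verifiable Credentials follow the W3C VC data model and its proof formats, and every name below is hypothetical.

```python
# Sketch of the issuance side: sign a biometric template into a credential
# bound to the holder. Field names and DIDs are hypothetical; real Verifiable
# Credentials follow the W3C VC data model and its proof formats.
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def issue_biometric_credential(issuer_key: Ed25519PrivateKey,
                               holder_did: str,
                               template: list) -> tuple:
    credential = {
        "issuer": "did:example:issuer",    # hypothetical issuer DID
        "credentialSubject": {
            "id": holder_did,              # binds the credential to the holder
            "biometric_template": template,
        },
    }
    payload = json.dumps(credential, sort_keys=True).encode()
    signature = issuer_key.sign(payload)   # altering the data breaks this proof
    return credential, signature

# Usage: the wallet stores both and presents them at verification time.
issuer_key = Ed25519PrivateKey.generate()
cred, sig = issue_biometric_credential(issuer_key, "did:example:alice", [0.1, 0.7, 0.2])
```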

BYOB-VC also bypasses the problem of deepfakes. Rather than just rely on a still or moving image, or a voice, you also ask for cryptographic proof of that same data created by a trusted issuer. And if you need further proof, ask them to add other Verifiable Credentials to their presentation, multiplying the layers of cryptographic proof and credential binding.

In use by Indicio customers and now widely available

BYOB-VC was pioneered by Indicio for use in travel, where a passport’s biometric data is compared with a liveness check and then issued as a Verifiable Credential following the International Civil Aviation Organization’s standards for Digital Travel Credentials. This enables travelers to use a Verifiable Credential for pre-authorized travel and seamless border crossing. Acuity Market Research’s The Prism Project described our biometric solution as “masterful.”

Now, Indicio's masterful approach and technology are available to any company, organization, industry, or sector that wants a simple, powerful solution to managing biometric authentication.

Learn more about Biometric Authentication through Verifiable Credentials on Indicio’s website, or if you have specific questions you can get in touch with our team of experts.

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Biometrics and Verifiable Credential pioneer Indicio launches “Bring Your Own Biometrics” Verifiable Credential solution to solve biometric fraud appeared first on Indicio.


IDnow

Sealing the deal: IDnow Trust Services AB becomes Europe’s newest QTSP.

It’s official: IDnow Trust Services AB is now certified as a Qualified Trust Service Provider (QTSP) in the EU. We sat down with the Chief Executive Officer of IDnow Trust Services AB, Johannes Leser and Registration Officer of IDnow Trust Services AB, Uwe Pfizenmaier to learn more.

In early 2024, IDnow began a joint venture with system integrator and technology provider, ESYSCO to establish the newly formed QTSP, IDnow Trust Services AB. In October it was officially approved by PTS, the Swedish supervisory body, and is now listed as a QTSP on the eIDAS Dashboard by the European Commission.  

This significant milestone allows IDnow to offer a wide range of eIDAS-compliant digital signing solutions and ultimately offer trust services to our customers. For more information, check out our interview with Uwe and Johannes below. 

For those who may not be familiar with the term, what exactly is a QTSP?  

Johannes: A Qualified Trust Service Provider, or QTSP for short, is an entity that can provide one or more trust services, such as electronic signatures, electronic time stamps, electronic seals, or certificates in a qualified manner. What differentiates a QTSP from a Trust Service Provider is that it operates under the stricter measures and requirements defined by the Electronic Identification and Trust Services (eIDAS) regulation, is independently assessed in regular audits by a conformity assessment body (CAB), and is required to have insurance due to the reversed burden of proof in case of any disputes.

Uwe: By using a QTSP, businesses benefit from an extra layer of security knowing that the products they choose are officially certified and audited by a higher authority. Although qualified trust services may or may not be required depending on the type of security an organization needs and the requirements of the country in which it operates, by choosing to do business with a QTSP, a higher level of confidence in security is achieved.  

What services will IDnow Trust Services AB offer? 

Johannes: As a QTSP, IDnow Trust Services AB can provide the following for now: 

Issue, validate, and manage qualified electronic certificates for signatures and seals.
Deliver additional services such as qualified time stamps.
Persist identification evidence data.
Execute certificate revocation.

Why will QTSPs be so important in the future of the digital signature market?

Uwe: 72% of organizations in Europe still use a mix of paper and electronic documents. Even so, a fully digital signing process is just around the corner. In fact, the European digital signature market is predicted to be seven times larger by 2030.

As QTSPs are verified services under strict eIDAS regulations and requirements, they guarantee their customers a significant level of trust and security to adopt new solutions like digital signatures. Before becoming a QTSP, the entity is required to undergo rigorous and independent assessment as well as regular audits to ensure they remain compliant. As such, QTSPs offer greater legal certainty and higher security for electronic transactions and meet the same level of trust as paper documents. 

What did the process of becoming a QTSP entail, specifically in relation to regulatory requirements?

Johannes: To become a QTSP, a full understanding of the eIDAS regulation is crucial. eIDAS offers a uniform framework of guidelines to allow completely digital and legally secure cross-border contracts within the EU. It also defines the process and technology behind different types of services such as signatures, seals, time stamps, etc.  

Uwe: In order to qualify as a QTSP, the entity must ensure all legal and regulatory obligations are met, such as data protection and privacy requirements. Once established, the eIDAS assessment process is initiated with a CAB and an audit is carried out. After successfully passing the audit, a QTSP application is submitted to a supervisory body. Upon acceptance, the QTSP is published on the EU Trust List.

What sets IDnow Trust Services AB apart from other QTSPs?  

Johannes: IDnow Trust Services AB is the first QTSP to offer SMS-free signing for digital contract signing. During the average digital signing process, users receive a One-Time Password (OTP) code that must be entered to authenticate the transaction. This step usually causes friction for users and companies, with 22% of drop-offs occurring at the OTP identification step.

SMS-free signing dramatically simplifies the signing process, eliminating the heavy-friction requirement of OTP codes and driving higher conversion rates. Plus, by eliminating the SMS step, fraud and operational risk are significantly reduced.

What advantages does the creation of IDnow Trust Services AB offer to IDnow customers?  

Uwe: The combination of IDnow’s leading identity verification expertise and IDnow Trust Services AB’s advanced trust services will deliver unmatched value and secure, yet agile, solutions, including Qualified Electronic Signatures to future-proof businesses in a rapidly changing regulatory landscape. 

Key benefits include being able to easily navigate complex regulations like AMLD 6 and eIDAS. As electronic certificates are legally binding and dispute-protected, they can also help reduce the risks of digital transactions in the EU. Plus, thanks to the Europe-wide validity of trust services, customers can now leverage IDnow's pan-European approach to provide seamless, consistent services for cross-border growth.

Johannes: By combining identity verification services with secure trust services, IDnow not only creates optimized processes, but offers unparalleled reliability and boosts confidence and trust in every transaction. 

As customers can perform identity verification and trust services from a single, unified and simplified process, they can benefit from streamlined procurement and contractual simplicity. 

What does the future look like for IDnow Trust Services AB? 

Uwe: As an eIDAS-certified QTSP, the outlook is very bright. The sky is the limit, especially regarding innovation. In the future, we hope to expand our product offerings and features as well as certifications.  

Johannes: In 2025, our plan is to equip more products with our QTSP features and explore new business use cases. Additionally, we plan on achieving another certification based on an audit that will support the forthcoming EUDI Wallet. Lastly, we plan to offer future-proof services such as QEAA (Qualified Electronic Attestation of Attributes) and advanced preservation solutions, all without sacrificing regulatory compliance. We are looking forward to the upcoming year and the many innovations we plan for our customers! 

Learn more about our range of digital signature solutions here

By

Kristen Walter
Jr. Content Marketing Manager
Connect with Kristen on LinkedIn


Ockto

Risk functions as partners in financial innovation


In heavily regulated markets, such as the financial sector, innovation and flexibility are essential to stay competitive. At the same time, strict regulation and high compliance requirements bring unique challenges. By working with risk functions (such as Legal, Compliance, and Risk) from the start, organizations can clear the way for faster and smoother innovation.


Innovating in a heavily regulated sector - Jordy Stoelwinder & Hidde Koning - Data Sharing Podcast

In this episode of the Data Sharing Podcast, host Hidde Koning welcomes Jordy Stoelwinder as his guest. Jordy works at Vista Hypotheken as product manager for digitalization and source data, having previously gained experience in this area at NHG and ING. Together they dig into the challenges around innovation in a strongly regulated sector such as the mortgage industry.



IDnow

IDnow Trust Services AB certified as a Qualified Trust Service Provider in the European Union


Munich, November 5, 2024 – IDnow, a leading identity verification platform provider in Europe, announces its partnership with newly founded IDnow Trust Services AB, a certified Qualified Trust Service Provider (QTSP) under EU Regulation 910/2014 (eIDAS).[1] Founded as a joint venture in Stockholm in early 2024 between IDnow and ESYSCO, a system integrator and technology provider, the company offers qualified trust services, such as electronic signatures, time stamps, and seals, that combine security, compliance, and user convenience.

Innovation and leadership in the digital signature market

As a QTSP recognized in the EU by the Swedish supervisory body Post-och telestyrelsen (PTS), IDnow Trust Services AB will issue, validate, and manage electronic certificates and time stamps; capture additional information, such as qualified time; hold identification evidence data; and perform certificate revocation, while operating as a Certificate Authority (CA). The QTSP provides assurance of the existence of specific electronic data at a specific time, such as proof that documents have been submitted for processing.

One of the features that IDnow Trust Services AB will immediately enable for IDnow’s customers is SMS-free signing. This certified capability simplifies the signing process, eliminating the requirement of One-Time Password (OTP) codes and driving higher conversion rates. IDnow Trust Services AB is the first QTSP that will allow this new user authentication process, which is already acknowledged by different CEN and ETSI standards and which will revolutionize the user experience in the digital signature market.  

New joint venture secures trust and simplifies compliance

“We are incredibly pleased that our joint venture, IDnow Trust Services AB, is already bearing the fruits of our labor. At IDnow, we have long made it our mission to actively shape and lead the Know Your Customer and digital identity industry; we are now once again showing this leadership role by doubling down on trust services, as they are an essential part of the transformation of the digital identity market heralded by eIDAS 2.0”, says Andreas Bodczek, CEO of IDnow.

He continues: “In the coming years, our customers will benefit from the synergy of identity verification and qualified trust services, ensuring a compliant and efficient experience for all business-critical operations across the EU. This collaboration sets a new standard for trust and operational efficiency, positioning businesses for long-term success in the fast-evolving digital landscape”.  

Johannes Leser, CEO of IDnow Trust Services AB, adds: “Trust and liability is the backbone of all business, and it will be the driving force behind the global digital economy. IDnow Trust Services AB is committed to delivering innovative and highly dependable solutions to IDnow, its customers, and partners. With trust as our mutual foundation, we’re poised to revolutionize the European digital signature market, which is expected to be seven times larger by 2030 than it is today.”

[1] The electronic IDentification, Authentication and trust Services (eIDAS) regulation defines a QTSP as a natural or a legal person who provides one or more qualified trust services.


Holochain

Mobile Holochain Applications Shipped!

Holochain in Your Hand

Volla has shipped their new Quintus smartphone with a Holochain-based app pre-installed.

I repeat, TL;DR: you can have a phone with a native Holochain app on it today.

That’s it. That’s the key takeaway of this article. Details below.

Volla Quintus

The Volla Quintus, a privacy-first smartphone, just began shipping and customers will be receiving their devices in the coming days. This phone runs both custom Android and Ubuntu Touch software, for a “Google-free” experience. Designed as an alternative to the surveillance-focused tech giants, Volla’s phones provide a realistic option for opt-out.

The Quintus is Volla's most recent model, but they have been producing user-centered phones since 2020. They are dedicated to a distraction-free user experience, with interface tools like their Springboard, a search-first launcher that lets you interact with your applications without the overwhelming attention traps, notifications, and socials pushed on other platforms.

The App(s)

The Holochain-based Volla Messages is shipping with the Quintus in a beta version. While the front end of the app is relatively unremarkable (it’s a chat app where you can message your contacts one-on-one or create groups), the back end is something totally new. 

Volla Messages uses Holochain for its networking and data storage, bypassing the need for central servers. The smartphones are networked together into their own cloud, with your data encrypted and held amongst your peers. 

Volla Messages also uses Holochain’s membrane proof feature, limiting spam by requiring you to consent to join a particular chat that you’ve been added to. Web3 doesn’t need to mean a free-for-all; it can mean privacy, security, and agency. You should only be in the chats you want to be in.
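Holochain membrane proofs are implemented as validation logic in an app's Rust zomes; purely to illustrate the gating idea, here is a conceptual Python sketch in which an agent is admitted only by presenting an invite it has accepted, signed by an existing member. Every name here is an illustrative assumption, not the actual Volla Messages implementation.

```python
# Conceptual stand-in for a membrane-proof style check: an agent is admitted
# to a chat only by presenting an invite it has accepted, signed by an
# existing member. Holochain implements such rules in Rust zome validation;
# every name here is an illustrative assumption.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def validate_membrane_proof(accepted_invite: bytes, signature: bytes,
                            member_keys: list) -> bool:
    """Admit the joining agent only if a current member signed the invite."""
    for key in member_keys:
        try:
            key.verify(signature, accepted_invite)
            return True
        except InvalidSignature:
            continue
    return False
```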

In the coming month, Volla will be releasing a second Holochain-based app. Volla Recovery is a personal cloud app that allows you to back up the data from your phone without relying on a company's servers, and without subjecting you to anyone's data policies. Instead, you can encrypt and back up your data among your peers, providing a seamless user experience alongside the privacy and security that your data deserves.

Why did Volla Choose Holochain?

Volla built on Holochain because we provide scalable cloud-style apps without central servers. They are thinking about user privacy through the complete stack, from hardware, to software, to cloud services and edge computing. Holochain is a critical piece for this.

Here is what they have to say about it:

The big picture of Volla is a secure and independent communication infrastructure. A smartphone is an elementary component. The cloud is another important element. The only way to prevent external influence is distributed, highly encrypted edge computing.

—Dr. Jörg Wurzer, founder of Volla Phone
How to Access

Volla Messages isn’t exclusive to Volla devices. While the download links aren’t live yet, any Android user can download and use this software. After an upcoming revamp of the Volla website you should be able to download the beta Volla Messages app for your Android device. Eventually they’ll push the stable version of the app out to the major app stores so everyone can access this truly peer-to-peer tech.

Buy the Phone Now

Want a privacy-first phone? You can buy the Volla Quintus now. For those of you in the EU, use our discount code HOLOCHAIN10 on the Volla site. If you are outside the EU, then you can access a discount through this special link to their Indiegogo.

Monday, 04. November 2024

Ocean Protocol

Season 7 of the Ocean Zealy Community Campaign!


We’re happy to announce Season 7 of the Ocean Zealy Community Campaign, an initiative that has brought together our vibrant community and rewarded the most active and engaged members.

💰 Reward Pool

5,000 $FET tokens will be rewarded to the Top 100 users on our leaderboard 🚀

📜Program Structure

Season 7 of the Ocean Zealy Community Campaign will feature more engaging tasks and activities, providing participants with opportunities to earn points. From onboarding tasks to Twitter engagement and content creation, there’s something for everyone to get involved in and earn points and rewards along the way.

⏰Campaign Duration: 4th of November — 29th of November 12:00 PM UTC

🤔How Can You Participate?

Follow this link to join and earn:

https://zealy.io/cw/onceaprotocol/questboard

Season 7 of the Ocean Zealy Community Campaign! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 03. November 2024

KuppingerCole

Surviving the Cryptocalypse: Quantum Risks and Crypto Agility


In this episode, Matthias and Alexei explore the urgent need for organizations to prepare for the coming age of quantum computing and the potential risks it poses to current cryptographic standards. As quantum technology advances, traditional encryption methods may become vulnerable, putting critical data, transactions, and security at risk.

Alexei discusses the concept of crypto agility—the ability to quickly adapt cryptographic infrastructure in response to new threats. He shares practical advice on how to assess and update legacy systems, encryption methods, and workflows, including:

Where organizations should begin if they rely heavily on cryptography for critical data and transactions
How to evaluate and improve cryptographic infrastructure across digital systems, cloud environments, and hardware
The essential role of vendor collaboration and supply chain security in building quantum-safe systems
How to prioritize threats like ransomware and crypto-related risks based on industry needs
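As a rough illustration of what crypto agility can mean in code (not something from the episode), the sketch below routes signing through a named algorithm registry so that a weakened algorithm can be swapped by changing configuration rather than call sites; a post-quantum scheme would simply be one more entry. All names are illustrative.

```python
# Rough illustration of crypto agility: route signing through a named registry
# so a weakened algorithm can be swapped by changing configuration rather than
# code. A post-quantum scheme would be one more entry once libraries land.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIGNERS = {
    "ed25519": (Ed25519PrivateKey.generate,
                lambda key, data: key.sign(data)),
    "ecdsa-p256": (lambda: ec.generate_private_key(ec.SECP256R1()),
                   lambda key, data: key.sign(data, ec.ECDSA(hashes.SHA256()))),
}

def sign(algorithm: str, data: bytes) -> bytes:
    """Algorithm is chosen by policy/config, not hard-coded at call sites."""
    keygen, signer = SIGNERS[algorithm]
    return signer(keygen(), data)

print(len(sign("ecdsa-p256", b"hello")))  # swap in "ed25519" without code changes
```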

Alexei also underscores the importance of workforce training, advising that while employees don’t need deep cryptography knowledge, they must understand secure practices and tools approved by their organization’s security policy.



Saturday, 02. November 2024

Ontology

Ontology Monthly Report — October 2024

Ontology Monthly Report — October 2024

Empowering the community, celebrating achievements, and driving innovation.

Introduction

Ontology’s DIF Hackathon is still ongoing! Build with ONT ID and showcase your skills to the Ontology community. If you missed our latest workshop, catch up by listening to the recordings!

Events and Partnerships 🤝

Glacier Community Update: We were thrilled to have Glacier join our community update!
Autumn Celebration with NOWChain: Did you participate in the Autumn celebration with NOWChain and Ontology? Thanks for joining us!
Quest with LetsExchange: Our joint quest with LetsExchange is live — join in for exciting rewards!
Rivalz Community Feature: We’re featuring Rivalz in our community updates, adding a fresh perspective to Ontology’s ecosystem.
SubQuery Partnership: We’re proud to announce our latest partnership with SubQuery, strengthening our network and services.

On-Chain Metrics 📊

Total Nodes: 880 nodes are currently active on the network.
Staking Rate: 26.161%
Total On-Chain Transactions: 19,609,673 transactions
dApp Ecosystem: 177 DApps with a cumulative total of 7,797,562 transactions

Community Engagement 💬

Privacy Hour: Our weekly Privacy Hour took place as usual, with insightful discussions about Web3 privacy practices.
Community Updates: Regular updates kept everyone informed and engaged.
Australia Node Halloween Event: Our Australia node hosted a Halloween event, adding a festive touch to the community!

What’s Next? 🎯

Exciting developments and upcoming events to look forward to:

7th Anniversary Campaign: Stay tuned for our 7th anniversary celebrations — more details to come soon!

Follow Us 📱

Stay connected with Ontology by following us across our social media channels. Your engagement is key to our shared growth in the world of blockchain and decentralized technology.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Ontology Monthly Report — October 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 01. November 2024

Caribou Digital

To AI or Not to AI? Insights from the Biometrics Institute Congress, 2024

Keren Weitzberg & Aaron Martin

AI is on everyone’s lips. So perhaps it’s not surprising that this year’s Biometrics Institute Congress was filled with much hand-wringing about AI. A self-described “non-profit,” the Biometrics Institute is probably best understood as part industry lobbying group, part think tank, part research institute. Each year, it holds a conference in London, which brings together policymakers, vendors, regulators, privacy rights groups, and the occasional not-so-undercover academic (like ourselves).

Here are some of our key insights from this year’s congress:

Biometric vendors are currently debating how to define themselves vis-a-vis AI and how to engage with the current AI hype cycle

“AI” is a notoriously slippery term, so much so that scholars, civil society groups, and policymakers continue to argue over its very definition. Such debates are not purely semantic; rather, they shape how an industry is regulated, how it approaches funders and clients, how it is publicly understood, and how it understands itself. In their annual ‘state of the industry’ report, the Biometrics Institute tackles this question, asking “To AI or not to AI?” (We would attempt to summarize the key points for blog readers, but alas, it is proprietary knowledge that is only available to paid members…) At the Congress, panelists and keynote speakers raised what seemed almost existential and ontological questions. One session, for example, was entitled “What is the relationship between AI and biometrics?”

From one perspective, the relationship between AI and biometrics may seem obvious: AI is expected to enhance the functionality of identity checks. Thanks to machine learning and artificial neural networks, biometric systems have become far more accurate and precise in recent years, improving their technical performance and increasing their spread and market share. More recently, generative AI is posing new risks and vulnerabilities for the sector, including sophisticated forms of synthetic identity fraud, such as face morphing, which has the potential to disrupt the security of the travel sector. OpenAI’s release of its real-time voice API, for example, has renewed alarms about fraudsters circumventing voice recognition software.

But AI is also part of a powerful tech and regulatory imaginary. At the Congress, AI seemed less a technology (or set of technologies) to be adopted than a loaded, polyvalent term to be contended with. Generating a kind of “hyperreality,” AI seems to be everything and nothing at once, invoking both dystopian and utopian futures, producing vociferous proponents, equally vocal detractors, and a growing (if sometimes quieter and more measured) group of skeptics.

The biometrics industry has long been anxious about its public reputation, particularly in the wake of a string of controversies over encoded racism within facial recognition algorithms and in light of mounting resistance to the use of biometric systems for policing, migration control, and state repression. A recent industry survey by the Biometrics Institute revealed concerns that public mistrust of AI would spill over into the sector, further inflaming public opinion: “A significant 80% of respondents believe public opinion on AI will directly impact their views on biometrics. This highlights the need to address public concerns about AI to build trust in biometric applications.” This is one of many reasons why vendors and clients may seek to distance themselves from the AI moniker, or at least carefully navigate how they relate to the capacious and ambiguous term.

Regulators and industry are not necessarily at odds with one another

The Biometrics Institute is a space of engagement — one where regulators and industry can speak productively and where regulators can make a case for compliance. This is in contrast to common understandings about the relationship between regulation and business, whereby the regulators are thought to be adversarial and mistrusting of industry operators.

Several representatives of governmental regulatory bodies were in attendance at this year’s Congress, including the UK’s Information Commissioner’s Office (ICO). Other key public stakeholders in attendance were the EU’s DG CNECT, which oversees the EU’s AI Act, and the Office of Privacy and Civil Liberties at the US Department of Justice. The tone struck in their presentations was one of cooperation and assistance. As John Edwards, UK Information Commissioner, told the audience, “we are on the same side.” Emphasizing that the UK is a space where biometric technologies can flourish, he argued that regulators can help industry: “We want you to be able to use biometric data that add value to society and protect people’s privacy.”

This is not necessarily a story of regulatory capture, but it does speak to the way that industry and regulatory bodies are actively shaping one another. It certainly reflects an increasingly accepted mode of regulation that emphasizes cooperation and highlights the benefits of technological innovation, potentially at the expense of fundamental rights protection.

Rather than restrict biometric use cases, AI regulations may, in the long run, facilitate (and legitimate) their spread

Rather than necessarily inhibiting the biometrics industry, AI regulation can help the sector identify and manage risk, benefiting corporate players. Take, for example, the EU AI Act, which has recently begun to come into force. Several presentations at the Congress were devoted to the new Act and its implications for industry. Irina Orssich, Head of Sector AI Policy at DG CNECT, explained that the Act takes a risk-based approach to biometrics. It divides use cases into a taxonomy of risk — from unacceptable to high-risk to limited to low/minimal risk. Compliance with the EU AI Act and adoption of this taxonomy can be seen as a form of risk mitigation — a means for companies to limit their liability and exposure in ways that will keep regulators at bay. Importantly, however, it also legitimates those applications that are deemed to be less risky according to the rules.
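As a purely illustrative sketch of that taxonomy as data (not from the presentation, and not legal guidance), one might encode the tiers and some example classifications like this; the specific assignments below are rough illustrations only.

```python
# Illustrative encoding of the Act's risk tiers as data. The example
# classifications are rough sketches for illustration, not legal guidance;
# consult the Act itself for actual classification rules.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk: conformity assessment required"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk"

EXAMPLE_CLASSIFICATION = {
    "real-time remote biometric ID in public spaces": RiskTier.UNACCEPTABLE,
    "post-hoc remote biometric identification": RiskTier.HIGH,
    "chatbot that must disclose it is an AI": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def compliance_note(use_case: str) -> str:
    # Unknown use cases default to the cautious end of the taxonomy.
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.value}"
```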

“Besides minimising risks,” notes legal scholar Nathalie Smuha, “regulation could facilitate AI’s uptake, boost legal certainty, and hence also contribute to advancing countries’ position in the…‘race to AI.’” The same could be said about biometrics and how emerging regulations will facilitate their further adoption and acceptance in different contexts, including consumer applications and more security-oriented spaces like borders. It is therefore incumbent upon critical voices to assess how regulators, and the rules they are mandated to enforce, further entrench biometrics in our everyday lives (whether or not they are ultimately understood to be AI) and the implications of this legitimization for our societies and polities.

To AI or Not to AI? Insights from the Biometrics Institute Congress, 2024 was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

KuppingerCole Rising Stars: Spot the Most Innovative Companies in the Market


by Martin Kuppinger

For two decades, KuppingerCole Analysts has been monitoring the IAM, Digital Identity, and Cybersecurity markets. With our Leadership Compass reports on market segments, the Buyer's Compass providing insights and criteria for decision makers, and the Executive View reports on individual solutions, we already have a range of reports covering the markets in breadth and depth.

Now, there is another member of that family of publications: KuppingerCole Rising Stars.

This new report is devoted to innovative vendors, commonly in the startup stage, ranging from early startups with initial customers to companies that have already demonstrated their growth potential.

We talk with hundreds of startups and emerging vendors every year. Now, we will rate them based on a defined set of criteria, with innovativeness and product-market fit being most relevant, but also looking at their management, organizational structure, and other factors.

Vendors that pass our defined, ambitious thresholds will earn the “KuppingerCole Rising Star” rating for their strong market potential. 

We believe these reports will be of value for both decision makers in end-user organizations and financial investors, shining a spotlight on vendors that are not yet broadly known and visible but should be observed. For end-user organizations, they might fill gaps or even become an alternative to established vendors. For investors, they are logical targets. 

Stay tuned! 


SelfKey

SingularityDAO Approves Merger With Cogito Finance and SelfKey Following SDAO Community Vote


Gros Islet, St. Lucia, 1st November 2024 - SingularityDAO has concluded a community vote to determine its proposed merger with Cogito Finance and SelfKey. SDAO holders voted overwhelmingly in favour of the merger, enabling SingularityDAO to press ahead with plans to form Singularity Finance, an EVM Layer 2 for tokenising the AI economy.


Tokeny Solutions

Tokeny’s Talent | Jordi’s Story

Jordi Reig is Head of Engineering at Tokeny.

Tell us about yourself!

I’m Jordi Reig and I’m living in a small town near Girona, about 100 km from Barcelona, enjoying nature, tranquility and the simple pleasures of life. I studied Computer Science at university and also hold an MBA. I’ve been working in the technology field for over 20 years, with several of those years in management roles. While I have many hobbies, I particularly enjoy playing football, mountain running and watching sci-fi movies.

What were you doing before Tokeny and what inspired you to join the team?

Well, I was doing something similar at another company, working to create the best possible conditions for the teams I managed to thrive, whether by increasing well-being, enhancing team dynamics and processes, or improving deliveries.

How would you describe working at Tokeny?

Challenging but rewarding, with open communication, freedom to express opinions and a great environment to develop your abilities.

What are you most passionate about in life?

Enjoy life as much as I can. Life is short, so make the most of it!

What is your ultimate dream?

Live more, work less 😅. And I wish I could catch a glimpse of our world a thousand years from now.

What advice would you give to future Tokeny employees?

Get ready for what’s coming in the world of tokenization because it’s going to be amazing!

What gets you excited about Tokeny’s future?

The countless possibilities our solution can bring to institutions and society, along with the growth it will drive for the company, are immense.

He prefers: (the original post presents this as a this-or-that checklist; the pairs are coffee or tea, movie or book, office or home, dogs or cats, call or text, burger or salad, mountains or ocean, wine or beer, countryside or city, Slack or email, casual or formal, crypto or fiat, and night or morning)


The post Tokeny’s Talent | Jordi’s Story appeared first on Tokeny.


KuppingerCole

Identity and Access Governance


by Nitish Deshpande

This report provides an overview of the Identity and Access Governance market and a compass to help you find a solution that best meets your needs. It examines solutions that provide an integrated set of access governance capabilities for on-premises and SaaS systems. The report provides an assessment of the capabilities of these solutions to meet the needs of all organizations to monitor, assess, and manage access-related risks such as over-entitlements and SoD (Segregation of Duties) conflicts.

Identity and Access Governance


by Nitish Deshpande

Explore Identity and Access Governance solutions for tackling regulatory compliance, security risks, and managing complex hybrid environments efficiently. Learn more about how to select the solution that is right for you in our buyer's guide.

Finema

This Month in Digital Identity — November Edition


Welcome to the November edition of our monthly digital identity series! This month, we’re diving into essential advancements shaping digital identity and the future of secure verification. Discover key updates on the European Digital Identity Wallet, the latest approaches to mobile driver’s license verification, and how deepfake detection is evolving to tackle growing threats. Plus, we’ll explore Jumio’s innovative biometric liveness detection and its role in combating identity fraud.

Here’s a closer look at what you’ll find in this month’s insights:

The EUDI Wallet

The European Digital Identity (EUDI) Wallet is a transformative initiative designed to provide EU citizens with a secure, self-sovereign digital identity solution. This wallet enables individuals to manage their identity information independently, ensuring privacy and security in online interactions. By facilitating access to essential services such as banking, healthcare, and governmental applications, the EUDI Wallet promises to streamline daily life and foster a more inclusive digital economy.

However, its implementation faces several significant challenges. Firstly, robust cybersecurity measures are crucial to prevent identity theft and data breaches, which could undermine user trust. Secondly, achieving regulatory harmonization across diverse EU member states is essential, as different countries have unique legal frameworks and privacy regulations. Without a unified approach, the wallet’s effectiveness could be compromised, leading to confusion among users and service providers alike.

Collaboration among various stakeholders—including government agencies, technology providers, and civil society—will be pivotal in overcoming these hurdles. By establishing clear standards for security, interoperability, and user experience, the EUDI Wallet can become a reliable tool that empowers citizens while safeguarding their data. Ultimately, its success will hinge on building public trust and ensuring that the system is user-friendly, accessible, and compliant with the highest privacy standards.

mDL Verification

The emergence of mobile driver’s licenses (mDLs) signifies a significant shift in identity verification, presenting both opportunities and challenges for users and authorities. mDLs offer a modern alternative to traditional physical licenses, allowing users to carry their identification securely on their smartphones. This technological advancement aims to streamline the verification process for a wide range of services, from travel to online transactions.

However, the implementation of mDLs is fraught with challenges. One primary concern is the need for secure and intuitive verification processes that maintain user confidence while preventing identity fraud. Additionally, the lack of uniformity in regulations across different states poses a significant barrier to widespread adoption. Each state has its own legal standards and technical requirements, complicating interoperability and making it difficult for users to rely on their mDLs outside their home jurisdictions.

Privacy is another critical issue, as users must be assured that their personal information will remain secure and confidential. Striking a balance between robust security measures and a seamless user experience is essential for gaining public acceptance.

To facilitate the successful rollout of mDLs, continuous collaboration among stakeholders — including government agencies, technology developers, and consumers — is vital. By establishing best practices and regulatory frameworks, stakeholders can ensure that mDLs become a trusted and widely accepted form of identification, paving the way for a more secure and efficient digital identity landscape.

Deepfake Detection

The rise of deepfake technology presents significant challenges to the authenticity of digital content, raising concerns about misinformation and trust in media. Deepfakes utilize advanced artificial intelligence to create highly realistic but fabricated videos and audio, making it increasingly difficult for viewers to discern truth from deception. As this technology becomes more sophisticated, the need for effective detection methods becomes paramount.

Various techniques are being developed to identify deepfakes, focusing on detecting subtle inconsistencies that can indicate manipulation. These methods include analyzing facial movements, lighting discrepancies, and unnatural expressions, which may signal that a video has been altered. As detection technologies evolve, they must continuously adapt to keep pace with advances in deepfake creation.
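For a flavor of how such inconsistency cues can be computed, here is a toy sketch of one very simple temporal cue: statistics of frame-to-frame pixel differences. Real detectors rely on trained models; the heuristic and threshold here are invented purely for illustration.

```python
# Toy sketch of one temporal-consistency cue sometimes used in screening:
# statistics of frame-to-frame pixel differences. Real detectors rely on
# trained models; the heuristic and threshold here are invented.
import numpy as np

def frame_difference_scores(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale video; returns mean abs change per step."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def looks_suspicious(frames: np.ndarray, jitter_threshold: float = 5.0) -> bool:
    # Hypothetical cue: erratic motion energy across consecutive frames.
    return float(frame_difference_scores(frames).std()) > jitter_threshold
```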

A multi-pronged approach is essential to mitigate the risks associated with deepfakes. This involves not only developing robust technological solutions but also enhancing public awareness about the existence and implications of deepfakes. Educating consumers on how to recognize manipulated content is critical in fostering a more discerning audience that can critically evaluate the media they consume.

Regulatory measures also play a crucial role in addressing the challenges posed by deepfakes. Policymakers must consider ethical guidelines and legal frameworks that govern the creation and dissemination of synthetic media. By promoting transparency and accountability in digital content creation, society can better safeguard against the potential harms of deepfakes while preserving the integrity of visual communication.

Jumio’s Biometric Liveness Detection

Jumio’s development of in-house biometric liveness detection technology represents a significant leap forward in identity verification. As identity fraud becomes more prevalent, this innovative solution uses sophisticated artificial intelligence to accurately differentiate between genuine biometric data—such as facial recognition—and spoofing attempts, including photographs or masks. This capability is crucial for enhancing security in online transactions and customer onboarding processes.

The liveness detection technology analyzes various data points in real time, assessing factors such as facial movements and eye interactions to determine whether the biometric input is from a live person. By integrating this technology into their identity verification offerings, Jumio aims to provide organizations with a more reliable means of preventing identity theft and fraud.

Moreover, as the digital landscape evolves, the demand for effective biometric verification solutions is increasing. Organizations must navigate the dual challenge of enhancing security while ensuring a smooth user experience. Jumio’s biometric liveness detection addresses this need, positioning the company as a leader in the identity verification market.

In an environment where digital interactions are ubiquitous, establishing trust in online transactions is paramount. By offering advanced biometric solutions, Jumio is helping to build confidence among consumers and organizations alike, making it easier to engage in secure digital commerce. As identity verification technologies continue to evolve, Jumio’s innovations will play a vital role in shaping the future of secure online interactions.

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Together, we can contribute to a more secure and inclusive digital future.

This Month in Digital Identity — November Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

How to Reduce Cart Abandonment and Cultivate Customer Loyalty

Learn how identity can help retailers reduce cart abandonment, increase conversions, grow CLV, and turn casual browsers into brand advocates.

Thursday, 31. October 2024

auth0

Security Considerations in the Time of AI for Startups

Perspectives of a Builder, a Buyer, and a Founder

Indicio

Indicio co-developed Digital Farm Wallet wins Constellation Research SuperNova Award

Trust Alliance New Zealand’s Digital Farm Wallet, co-developed by Indicio and Anonyme Labs, wins “Digital Safety, Governance, Privacy, and Cybersecurity” award. This is the second time Indicio’s Verifiable Credential technology has won a SuperNova Award.

Trust Alliance New Zealand (TANZ), a non-profit industry consortium representing farmers and other agriculture-chain stakeholders, has won one of the most prestigious awards in technology, a Constellation SuperNova Award. The awards recognize business transformation, and TANZ won in the category of Digital Safety, Governance, Privacy, and Cybersecurity.

TANZ’s Digital Farm Wallet uses Verifiable Credentials to create a data-sharing ecosystem for producers, growers, exporters, retailers, and consumers. It allows farmers to hold data around emissions, land, and water usage, and to share it directly and securely with relying parties, removing the need for large databases or for manually sending the same information over and over again.

This project and its technology are transformational for anyone working in agriculture. The ability to directly share authenticated information between farmers and distributors in a secure, privacy-preserving way delivered significant cost benefits in the trial. Farmers spent less time on form-filling and data management and more time farming. Looking ahead, it points to a more streamlined, transparent supply chain, one where farmers can not only promise organic or green growing practices, but where consumers can verify those claims for themselves using credentials at the time of purchase.

If you would like to learn more about the project and see the technology in action you can watch an in-depth video here.

This is the second time an Indicio customer has won a SuperNova Award. The first was in 2022 for technology that went on to support the launch of Digital Travel Credentials.

“Digital Agriculture is ripe for revolution using decentralized identity. We’ve shown that the Digital Farm Wallet delivers real, tangible economic benefits to farmers, and with our technology and the depth of its capabilities and features, we see Verifiable Credentials as being a game-changer in the global agricultural value chain,”

said Indicio CEO Heather Dahl. “Land, sea, air, public sector and private sector — there is nowhere our solutions aren’t driving digital transformation, and that’s because our technology makes sharing information smoother, faster, and more secure.”

To learn more about the Indicio Digital Farm Wallet and its features, get in touch with our team of experts and we’d be happy to discuss your specific needs.

To learn more about Indicio’s platform for decentralized identity solutions, see our page on Indicio Proven, or read the Beginner’s Guide to Decentralized Identity.

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Indicio co-developed Digital Farm Wallet wins Constellation Research SuperNova Award appeared first on Indicio.


KuppingerCole

Nov 26, 2024: 2024 PAM Market Insights & Vendor Analysis

Join us for a comprehensive webinar on the 2024 Leadership Compass for Privileged Access Management (PAM), where we’ll unpack the latest insights and vendor evaluations shaping the PAM landscape. Discover which vendors lead the market in innovation, product strength, and scalability, and explore emerging capabilities like Just-in-Time (JIT) access and Cloud Infrastructure Entitlement Management (CIEM). Gain a deeper understanding of how PAM solutions can secure critical assets across multi-cloud and on-premises environments and learn best practices for selecting a solution that aligns with your organization’s security and compliance needs.

From Stress to Security: Building a Focused, Resilient Workforce


by Jasmine Eskenzi

In today’s fast-paced digital landscape, distractions and cognitive overload are some of the primary reasons people fall victim to cyber threats like phishing and social engineering attacks. With multitasking and constant digital connectivity, employees are more susceptible to these tactics, exposing organizations to increased cybersecurity risks. Jasmine Eskenzi, Co-Founder and CEO of The Zensory, will address this critical issue at cyberevolution 2024, where she’ll discuss how mental clarity and mindfulness practices can become essential parts of a robust cybersecurity strategy.

In her session, Jasmine will present research highlighting the connection between everyday stressors and increased vulnerability to cyberattacks. She will also share how organizations can integrate mindfulness and focus-building exercises into their security programs, helping employees remain attentive and resilient against cyber threats. Attendees will learn actionable steps to foster a cybersecurity culture that empowers individuals to stay alert, mentally clear, and proactive in identifying risks, while building a supportive, people-first approach to digital security.

Watch our interview with Jasmine to get a glimpse of her unique insights on blending mindfulness with cybersecurity, and how this approach can address both human vulnerabilities and technical challenges in today’s complex threat landscape.


Ocean Protocol

DF113 Completes and DF114 Launches

Predictoor DF113 rewards available. DF114 runs Oct 31 — Nov 7th, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 113 (DF113) has completed.

DF114 is live today, Oct 31. It concludes on November 7th. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF113 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF114

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.
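As a rough, back-of-the-envelope illustration (my own sketch, not Ocean code), an even weekly spread of DF114's 37,500 OCEAN Predictoor budget would look like this:

// Hypothetical illustration only; the actual DF Buyer agent's
// purchase cadence and amounts may differ.
const weeklyOceanBudget = 37500; // DF114 Predictoor DF budget
const daysInRound = 7;           // Oct 31 to Nov 7
const dailyPurchases = weeklyOceanBudget / daysInRound;
console.log(`~${dailyPurchases.toFixed(2)} OCEAN of feed purchases per day`);
// ~5357.14 OCEAN of feed purchases per day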

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF113 Completes and DF114 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 30. October 2024

Northern Block

Northern Block Pilots Trust Registry with IATA and Air Travel Partners

Northern Block Pilots Trust Registry with IATA and Air Travel Partners in Fully Digital Air Travel Experience


[30 October 2024, Bangkok]

Northern Block, in collaboration with the International Air Transport Association (IATA) and key industry partners, has demonstrated that the future of air travel is digital. In a groundbreaking proof-of-concept (PoC), two passengers completed a fully digital, round-trip journey between Hong Kong and Tokyo on October 21 and 22, using digital wallets and travel credentials to navigate airport processes seamlessly.

“We’ve achieved something unprecedented here, extending beyond technical trust to build an ecosystem that is secure, reliable, and user-friendly at every stage,” said Mathieu Glaude, CEO of Northern Block.

This PoC demonstrated an unparalleled level of interoperability, bringing together over five technology solution providers and supporting more than five distinct types of digital credentials. Two different digital wallets offered an excellent user experience, while the inclusion of a trust registry provided assurance that each credential came from an authorized issuer, reinforcing trust across the ecosystem.

Highlighting Northern Block’s Role in Credential Verification and Trust Registries

Northern Block played a pivotal role by providing Cathay Pacific with its Orbit Enterprise Credentialing API for credential verification, along with an Orbit Trust Registry instance that allowed verifiers to confirm issuer authority, ensuring credential legitimacy throughout the passenger journey. Trust registries are foundational in communicating credential issuer conformance against an ecosystem governance framework, enhancing the integrity of each interaction. By verifying that every credential originates from an authorized issuer, the trust registry establishes a layer of trust that goes beyond technical requirements, embedding governance principles at every step.

Standards-Based Interoperability: Key to a Seamless Experience

This PoC demonstrated technical standards critical to enabling credential exchange and trust registry functionality. By aligning with IATA’s Technical Interoperability Profile, Northern Block and other vendors adhered to credential exchange protocols like OpenID for Verifiable Credentials, credential formats such as SD-JWT VC, Decentralized Identifiers (DIDs) such as did:web, and more. This alignment ensured that diverse solutions could operate seamlessly in a live, international airport setting.
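To make one of those standards concrete: did:web maps a DID to an HTTPS URL by a fixed rule defined in the did:web specification. Here is a minimal sketch of that rule (the example identifiers below are hypothetical, not the ones used in this PoC):

// Sketch of the did:web resolution rule: the method-specific id
// becomes an HTTPS URL hosting a did.json document.
function didWebToUrl(did) {
  const id = did.replace(/^did:web:/, '');
  const parts = id.split(':').map(decodeURIComponent);
  const host = parts.shift();
  const path = parts.length ? parts.join('/') : '.well-known';
  return `https://${host}/${path}/did.json`;
}

console.log(didWebToUrl('did:web:example.com'));
// https://example.com/.well-known/did.json
console.log(didWebToUrl('did:web:example.com:issuers:airline'));
// https://example.com/issuers/airline/did.json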

For the trust registry, the Trust over IP’s Trust Registry Query Protocol (TRQP) allowed verifiers to confirm issuer statuses quickly and reliably, supporting real-time decision-making and building confidence across the open ecosystem.
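In essence, a trust registry query asks: "is this issuer currently authorized to issue this credential type?" The sketch below shows the general shape of such a lookup; the endpoint path and response fields are illustrative assumptions, not the published TRQP wire format:

// Hypothetical trust-registry lookup (shape only; consult the ToIP
// TRQP specification for the real endpoints and fields).
async function isAuthorizedIssuer(registryBaseUrl, issuerDid, credentialType) {
  const url = `${registryBaseUrl}/issuers/${encodeURIComponent(issuerDid)}` +
              `?credentialType=${encodeURIComponent(credentialType)}`;
  const res = await fetch(url);
  if (!res.ok) return false; // unknown issuer, or registry unavailable
  const { status } = await res.json();
  return status === 'current'; // i.e., authorized and not revoked or suspended
}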

Highlights of the Proof-of-Concept

Digital Identity and Biometrics: A Fully Digital Travel Experience
Biometric verification paired with digital credentials allowed travelers to navigate airport processes, including check-in and boarding, without presenting physical documents.

Credential Verification and Interoperability
Northern Block’s Credential Exchange API enabled Cathay Pacific to verify credentials, and industry-standard VCs, such as boarding passes and visa credentials, were integrated seamlessly using multiple vendor solutions.

Trusted Issuer Verification with Trust Registry
The trust registry verified each credential’s issuer authority, ensuring the integrity and trustworthiness of credentials from multiple issuers. By leveraging standards, this PoC demonstrates that digital credentials can be trusted and accepted across jurisdictions.

IATA’s Open API Hub: A Gateway for Digital Travel
Now accessible in IATA’s Open API Hub, Northern Block’s Credential Exchange API offers air travel stakeholders the infrastructure to support digital credentials and enhance the travel experience.

Moving Forward

As we see more adoption of digital credentials, trust registries are becoming a foundational trust establishment infrastructure. They will help to enhance the travel journey, and enable value creation in retailing, service delivery, and across the whole partner value chain. We invite you to join us in future projects as we continue to push towards digital transformation of the airline industry together.

The post Northern Block Pilots Trust Registry with IATA and Air Travel Partners appeared first on Northern Block | Self Sovereign Identity Solution Provider.


auth0

Streamline Account Provisioning and Management with SCIM

Automate user lifecycle, enhance security and boost IT efficiency across your enterprise
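For readers new to SCIM: provisioning boils down to the identity provider making standardized REST calls that carry JSON user resources. A minimal sketch of the payload shape, per RFC 7643 (the field values here are made up):

// Minimal SCIM 2.0 user resource (shape per RFC 7643; values are
// illustrative). An IdP would POST this to an app's /scim/v2/Users
// endpoint to create the account, then PATCH or DELETE it as the
// user moves or leaves.
const newUser = {
  schemas: ['urn:ietf:params:scim:schemas:core:2.0:User'],
  userName: 'bjensen@example.com',
  name: { givenName: 'Barbara', familyName: 'Jensen' },
  active: true
};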

TBD on Dev.to

How Decentralized Apps Can Make Everyday Tasks Easy

Whenever I explore technology that's new to me, whether it be learning how decentralized apps work or what an open source tool does, it's seeing the technology in action that helps me understand whether or not it can impact me and my everyday life. This is why, every month at TBD, we ask: what better way to learn about our technology in action than from the innovators using TBD's technologies today?

As our open source projects continue to develop, our community members contribute to the global effort of decentralizing the web with their independent projects. Here are the latest contributions from them.

Ariton

Developed by Sondre Bjellås (@sondreb), Ariton is a Web5 community SuperApp. It acts as a decentralized platform for building and managing communities! Ariton runs on any device with the ability to add any Mini Apps (or features) you want, like chat, groups, events, notes and more. Built on free and open standards, your identity and data are always fully in your control. Currently in prototype stage, you can learn more and try it out here.

Kin AI

Kin AI is a personalized Web5 AI companion that offers guidance, coaching, and emotional support! Kin helps you piece together your problems and how to solve them in a way that seamlessly fits how you want it to. All your data stays on your device, and no one can access it without your specific permission. Live in beta, you can get early access on the App or Play store.

BlockCore Wallet

Also developed by Sondre (mentioned above), BlockCore Wallet is a non-custodial Web5 wallet in your browser that supports DIDs (decentralized identifiers), tokens, crypto currencies and more! You can add different accounts, send/receive payments, and even use an address book to quickly send multiple payments to one contact. You can learn more and try it out yourself in the BlockCore Wallet Guide.

Share Your Open Source Project

Amazing projects, right? They really help visualize how decentralized apps can bring ownership and value to your everyday life in ways you may not have imagined.

Have a cool open source project that incorporates TBD's decentralized technologies? We'd love to hear about it! Head over and share your work with us in Discord in our #share-what-you-do channel for a chance to have your project featured on our dev site.


Elliptic

Crypto regulatory affairs: Hong Kong plans to create panel for licensed crypto exchanges to facilitate regulatory consultation

Regulators and policymakers in Hong Kong have offered further indication of forthcoming initiatives and priorities that could help to solidify Hong Kong’s status as the leading cryptoasset hub in the Asia-Pacific region. 



KuppingerCole

Enabling Smart Business Processes: Orchestrating Digital Identities & Signing


by Martin Kuppinger

Identity orchestration enables organizations to build flexible, adaptive user journeys that can adapt to the ever-changing requirements of modern organizations. Many use cases require advanced capabilities such as integrated identity verification, document verification, or qualified electronic signing to serve the security, risk management, and regulatory requirements of organizations. This whitepaper provides insight into what organizations should look for when selecting solutions for modernizing their user journeys. It also puts a spotlight on the Xayone platform as one solution serving this market.

HP Wolf Pro Security


by John Tolbert

This KuppingerCole Executive View report looks at the field of Unified Endpoint Management, threats to endpoints, and the need for PC management and security. A technical review of HP Wolf Pro Security is included.

Tuesday, 29. October 2024

TBD on Dev.to

Why Broken Links Are Costing You Brand Deals (And How to Fix It)


Have you ever watched a creator’s video and thought, "Where did she get that top?" or "I need that protein powder"? You scroll through the comments, only to see the infamous "link in my bio" comment. You rush to click the link, and you're hit with: page not found 😒. I remember once being so desperate that I took a screenshot of the item and reverse-searched it on Google Images. I found something similar but not what I wanted. SO frustrating. Eventually, I gave up and kept on scrolling.

Now, imagine how many potential sales that creator lost because a third-party platform’s server was down. Their metrics won't even reflect those missed opportunities, making it harder to secure brand deals. Who actually has time for that? That’s when I realized I could use Decentralized Identifiers (DIDs) to create my own decentralized link hub utilizing service endpoints. With this setup, all my links and contact info are stored in one place—owned and controlled by me. Even if a service that houses all my links goes down, my links will always be accessible because they’re not reliant on any external platforms to display them. I’m sharing this in hopes that fellow creators won’t miss out on potential brand deals, and I won't have to cry over a top I never got to buy.

Before I show you exactly how you can create your own decentralized link hub, let's answer some of the questions you're probably asking yourself.

What are Decentralized Identifiers (DIDs)?

So, what exactly is a Decentralized Identifier, or DID? Think of it as your username—the one source of truth for everything you do online—except this one is owned and controlled entirely by you. It’s a unique "address" that's verifiable and doesn’t rely on any central authority like Facebook, Google, or any other service. Instead, DIDs give you the freedom to manage your own identity online, without needing to trust a single platform to store or validate your information.

In the context of a decentralized link hub, your DID becomes the hub for all your important links. It’s not tied to any third-party service, which means you never have to worry about followers scrolling past simply because your link page isn't working. And when you update your links, you only need to do it once: because they're tied to your DID, they stay consistent across all your social platforms, giving you full control.

How are Service Endpoints going to help me?

Now, let’s cover what service endpoints are. These might sound technical, but they’re actually pretty simple—think of them like your digital address/phone book. Remember those huge yellow books you used to sit on at the hair salon? They were filled with phone numbers and addresses, making it easy to find and contact people. Well, service endpoints are kind of like that, except they’re the digital "addresses" for different parts of your online identity. These could be links to your Instagram profile, website, direct messages, or even your affiliate links.

These endpoints live in your DID document. Instead of relying on centralized services like Linktree, your DID acts as the home for all your important links. When someone resolves your DID, they can access the service endpoints that you’ve decided to share.

You can also easily update and delete these links anytime you need to, again without relying on any third-party platform to keep those connections working.
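For example, here's a minimal sketch of swapping one link out and republishing. The YouTube URL is a made-up placeholder, and I'm assuming DidDht.publish re-announces the updated document; check TBD's docs for the current API before relying on this:

// Drop the X link...
myBearerDid.document.service = myBearerDid.document.service.filter(
  (service) => !service.id.endsWith('#X')
);

// ...add a new one (placeholder URL)...
myBearerDid.document.service.push({
  id: 'YouTube',
  type: 'personal',
  serviceEndpoint: 'https://www.youtube.com/@your-channel'
});

// ...then republish so anyone resolving your DID sees the update
await DidDht.publish({ did: myBearerDid });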

The fix: let's create a decentralized Link Hub

If you’re more of a visual learner, check out my YouTube short where I show you exactly how. For this example, we're going to create a DID with two service endpoints: one pointing to my LinkedIn and the other pointing to my X profile.

Step 1: Import web5/dids package

import {DidDht} from '@web5/dids'

Step 2: Create DID with service endpoints

const myBearerDid = await DidDht.create({
  options: {
    publish: true,
    services: [
      {
        id: 'LinkedIn',
        type: 'professional',
        serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
      },
      {
        id: 'X',
        type: 'personal',
        serviceEndpoint: 'https://x.com/EbonyJLouis'
      }
    ]
  }
});

Now we've created your DID with service endpoints pointing to your LinkedIn and X profiles.

Step 3: Let's print our entire DID, also known as a BearerDid, to see our DID document where these service endpoints can be found:

console.log("my bearerDid", myBearerDid)

It is important to never share your full BearerDid: it contains private keys that only you should have access to. The holder of these keys can perform private key operations, like signing data. Check out this Key Management Guide to learn how to properly manage your DID keys.

Output:

my bearerDid BearerDid {
  uri: 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey',
  document: {
    id: 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey',
    verificationMethod: [ [Object] ],
    authentication: [ 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0' ],
    assertionMethod: [ 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0' ],
    capabilityDelegation: [ 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0' ],
    capabilityInvocation: [ 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0' ],
    service: [ [Object], [Object] ]
  },
  metadata: { published: true, versionId: '1729705713' },
  keyManager: LocalKeyManager {
    _algorithmInstances: Map(1) { [class EdDsaAlgorithm extends CryptoAlgorithm] => EdDsaAlgorithm {} },
    _keyStore: MemoryStore { store: [Map] }
  }
}

This output contains your DID string (uri), which is your "username", along with the services array and some authentication and verification methods. To learn more, refer to this DID Document Guide.

Step 4: Now let's look closely at just our service array:

console.log("personal link hub", myBearerDid.document.service || "No Services Found");

Output:

decentralized link hub [
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn',
    type: 'professional',
    serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
  },
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#X',
    type: 'personal',
    serviceEndpoint: 'https://x.com/EbonyJLouis'
  }
]

How do I share these links?

Now that your DID is in your bio, how do your followers access your links? It's simple: they just need to resolve your DID to see a full list of your shared links.

How you resolve your DID will differ depending on the DID method used to create it. In this example, we're using the DHT DID method:

// DID in your bio
const didDhtUri = 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y'

// resolve the DID
const resolvedDhtDid = await DidDht.resolve(didDhtUri);

// access the DID Document's service links
const dhtDidDocument = resolvedDhtDid.didDocument.service;
console.log(dhtDidDocument)

Output:

[
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn#LinkedIn',
    type: 'professional',
    serviceEndpoint: [ 'https://www.linkedin.com/in/ebonylouis' ]
  },
  {
    id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn#X',
    type: 'personal',
    serviceEndpoint: [ 'https://x.com/EbonyJLouis' ]
  }
]

As you can see, we’ve successfully set up our service endpoints to point to both my LinkedIn and X accounts. Now it’s your turn to secure the bag: create your own decentralized link hub! And if you tweet about it, don’t forget to tag me.

To learn more about Decentralized Identity check out TBD's Docs.


SelfKey

SelfKey Announces Community Vote on Proposed Merger With SingularityDAO and Cogito Finance to Form Singularity Finance

Kingstown, Saint Vincent and the Grenadines, October 29th, 2024 - Decentralised identity platform SelfKey has announced a community vote to determine its proposed merger with SingularityDAO (SDAO) and Cogito Finance. If approved, SelfKey will merge to become Singularity Finance, a Layer 2 for tokenizing AI assets.



auth0

How to Choose the Right Authorization Model for Your Multi-Tenant SaaS Application

Explore how tools like Auth0 Organizations, OPA, and Okta FGA can streamline your authorization strategy for secure, scalable SaaS development.

Indicio

A landmark in digital travel — Aruba, Indicio and SITA combine a DTC and IATA OneID for international flights


Thirty minutes from plane to beach. That’s the stress-free vision Aruba had for its tourist-driven economy. Now, thanks to a collaboration with SITA, a global leader in air travel IT, and Indicio, a global leader in decentralized identity, that vision is a reality.

In a presentation at the forthcoming IATA World Financial Symposium (WFS) and World Passenger Symposium (WPS) in Bangkok, Michael Zureik, Senior Digital Identity Architect from SITA, will explain how travelers flying from Atlanta to Aruba used a Digital Travel Credential (DTC) developed by SITA, the government of Aruba, and Indicio, and incorporated it with the International Air Transport Association (IATA) OneID, in collaboration with Delta Air Lines.

The combination enabled travelers to the Caribbean island to get preauthorization for travel before flying (using the DTC),  streamline their check-in, baggage drop, and boarding at Hartsfield-Jackson airport in Atlanta (using IATA One ID), and then cross the border in seconds upon arrival in Aruba (using the DTC). They then boarded a tour bus and were on the beach within the 30-minute time frame.

The result is transformational for international travel. It heralds the arrival of seamless digital travel, where travelers get to hold their data on their mobile devices and present it for instant cryptographic verification to prove who they are. This streamlines the journey from booking to arrival, reduces waiting times, especially at border crossings, while providing data privacy for the traveler and better security for airlines, airports, and governments.

The trial is the first of its kind to merge the two leading decentralized digital travel identities into one workflow. Because the DTC is an authenticated digital version of a verified passport and is bound to its rightful owner through liveness and biometric checks, governments can trust it for travel authorization and border crossing. While IATA’s One ID uses Verifiable Credential technology to streamline airport processes, it is not a digital representation of a passport and can’t be used to cross a border.

Indicio developed the Atlanta-Aruba PoC and the software to create, hold, and verify both the DTC and OneID Verifiable Credentials, demonstrating the company’s expertise in developing Verifiable Credential and decentralized identity technology and solutions.

“As a company driving seamless digital transformation using Verifiable Credentials, we were tremendously excited to bring SITA and IATA together in Aruba,” said Heather Dahl, CEO of Indicio. “In addition to developing and implementing the world’s first Digital Travel Credential for SITA, we had the privilege of taking IATA’s vision for One ID and making it reality. And then we combined both in Aruba’s ground-breaking travel app, AHOP.”

Jeremy Springall, SVP of Borders at SITA, added, “This collaboration represents a significant step forward in redefining travel experiences. By using the power of digital credentials, we are not only enhancing efficiency but also prioritizing traveler privacy and security.”

“This is a showcase for Indicio’s technical expertise in decentralized identity,” said Dahl. “There is nothing we can’t do when it comes to designing and optimizing seamless authentication. But this trial was also an important market signal. In a world that is rapidly embracing digital identities and credentials for all kinds of data sharing, there will be many different solutions, some with more and better features than others. You never need to be limited by these choices. Because we build on interoperable standards, we can wrap credentials together and create workflows that deliver the best possible performance and user experience, easily adopt new features, and keep driving market innovation.”

SITA’s Zureik will present on the Atlanta-Aruba trial in World Ballroom B on Thursday, October 31, 2024 at 3:15pm.

For more information about Digital Travel Credentials, combining them with One ID, or other uses for verifiable credentials, visit Indicio.tech or contact us directly.

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post A landmark in digital travel — Aruba, Indicio and SITA combine a DTC and IATA OneID for international flights appeared first on Indicio.


Lockstep

Hello, I’m a zombie. What brings you here today?


Some people think we’re ready to get psychological counselling from AIs. But in a tragedy reported by Kevin Roose of the New York Times, a troubled teenager committed suicide after apparently falling in love with a chat bot.

I have listened to Kevin Roose’s account of the suicide case on the NYT Hard Fork podcast. Honestly, I can’t bring myself to repeat the details, even in summary.

Here, I just want to ask if AI today is fit for purpose as a counsellor.

We’re putting zombies behind the wheel

If you know anything about the Large Language Models that power AI today, you will appreciate they are zombies. They are amazingly adept at natural language, and give every impression of comprehension, nuance, maybe even some sentience. But they are utterly hollow inside.

LLMs are not designed to model minds or indeed any aspect of the real world. These so-called “language” models don’t even understand spelling sufficiently to be able to count (‘in their mind’s eye’ as humans can) the number of Rs in “strawberry”.

And yet this brand-new software is being packaged into life-like chat bots and promoted by some for psychological therapy.

The company Character.AI lets developers customise chat bots with different personas and then hosts them. One such bot is dubbed Psychologist and described literally as “someone who helps with life difficulties”; it opens each conversation with “Hello, I’m a Psychologist. What brings you here today?”.

In tiny font beneath the dialog box, the user is cautioned “Remember: Everything Characters say is made up!”. However, the banner saying this bot can help with life difficulties is displayed outside the dialog box and would therefore appear to be a claim made by the company, not the bot.

How do we think AIs think?

LLMs are research tools. They capture the statistics of text and speech through training on vast files of natural language, and they then generate sentences in a given context which replicate those stats.

It’s really just a cool side-effect that these models can calculate plausible “answers” in response to “questions” and string sentences together to form conversations or essays.

The shudder quotes are deliberate. When we humans hear a question, we can usually figure out the reasons and interests that lie behind it and use that context to inform how we engage with the other person. But a chat bot is only coming up with sequences of words that it predicts will be appropriate, based on billions of prior examples.

No chat bot cares what you’re interested in, for caring is light years away from what LLMs were designed to do.

LLMs can display distinct attitude; indeed, the models can be prompted to adopt a certain style or manner. But any personality we might be tempted to see in a chat bot — as with the content it generates — is just the result of replicating the statistical properties of a subset of the training data. No chat bot can know that, for example, it is a member of a demographic or a tribe when it’s prompted to answer in the manner of an angry teenager or a Liverpool supporter.

Artificial intelligences today do not reflect internally on the things they do.  As such, they don’t think as we think, or as we might think they think.

A thought experiment

Imagine this.

A start-up business launches a self-help program for children, where total strangers are made available to sit in rooms alone with the kids and talk with them for hours on end, with the express purpose of forming ongoing relationships.

These potential new pals will have no family experience of their own. They will not have been formally schooled but instead are entirely self-taught on text from the Internet. And they occasionally hallucinate.

The supplier of the companions is aware of the hallucinations, but no one can explain them. In fact, company officials state flatly they don’t really understand any of the more complex behaviours of the companions.

On the plus side, they’re super-intelligent; they can ace medical and law school entrance exams. Cool. And any child can have them, 24×7, for free.

But I probably lost you at “strangers”.

The post Hello, I’m a zombie. What brings you here today? appeared first on Lockstep.


PingTalk

Introducing Helix: The Intersection of AI and IAM

Helix is Ping Identity’s new strategic initiative that brings AI and IAM together to create a more secure, efficient, and dynamic digital environment.  

In the evolving digital landscape, the interplay between artificial intelligence (AI) and identity & access management (IAM) will become increasingly critical. As AI continues to shape the future of technology, its role in identity management will expand beyond enhancing security and user experience; conversely, robust identity systems will be essential to the success of AI itself.

Generative AI brings with it amazing opportunities for innovation, automation, and improvements to operational efficiency. However, the security risks inherent in its use are significant if the AI is not appropriately authenticated, authorized, and governed. Authentication and authorization of human users will not be enough going forward – we must also ensure that AI agents are correctly authenticated and authorized. These agents will need to interact with and assist their human counterparts, all within a framework that guarantees lawful, transparent, and efficient operations.

Ping Identity’s vision embraces this dual responsibility: not only harnessing AI to advance identity solutions but also evolving identity frameworks to securely manage the identities and operations of AI agents.

PingHelix is Ping Identity’s new strategic initiative that seeks to embed AI at the core of the Ping Identity Platform in a secure and responsible way, paving the way for a future where AI and IAM are inseparable and work together to create a more secure, efficient, and dynamic digital environment.


Aergo

Aergo Successfully Implements V4 Hard Fork: A Major Step Forward for Enterprise Blockchain…

Aergo Successfully Implements V4 Hard Fork: A Major Step Forward for Enterprise Blockchain Innovation

Aergo has successfully implemented its V4 hard fork, an exciting milestone for the blockchain world. The hard fork introduces improvements to enhance performance, scalability, and security. This upgrade results from months of dedicated effort by the Aergo team. It is designed to cater to the growing needs of businesses and developers in the rapidly evolving blockchain ecosystem.

Further details about the V4 hard fork will be shared in an upcoming interview with the development team.

Behind the Scenes: A Rigorous Testing Process

The Aergo team conducted extensive testing to ensure the upgrade would be seamless and secure. The testing process began on the testnet, where various use cases were evaluated, including app installation and initialization, wallet functionality, transaction processing, and token operations.

The testing phase was critical in identifying any potential issues before the upgrade was applied to the mainnet. This cautious and methodical approach ensured that all functions operated stably, avoiding the risk of disruption to the network’s users.

In addition, Aergo implemented post-upgrade monitoring processes to track network performance in real time. Client-specific contingencies were also built into the process to ensure potential issues could be resolved quickly and efficiently.

What This Means for the Aergo Ecosystem

The successful V4 hard fork is a significant achievement for the Aergo community. For businesses, it means increased confidence in the platform’s ability to support large-scale, mission-critical operations. For developers, it opens up new possibilities for innovation and application development.

Aergo’s continued focus on scalability and security strengthens its position as a go-to blockchain platform for enterprises looking to harness the power of decentralized technologies. Whether you’re an existing user or someone exploring the platform for the first time, Aergo V4 clarifies that the network is built for growth and innovation.

Looking Ahead: What’s Next for Aergo?

While the V4 hard fork is a significant milestone, it’s just the beginning of what’s to come for Aergo. The team is already planning additional upgrades and features to further enhance the platform’s capabilities. As the blockchain space continues to evolve, Aergo remains committed to staying at the forefront of innovation, ensuring its platform remains adaptable, scalable, and secure for the future.

Conclusion

The Aergo team acknowledges that the release of the V4 hard fork was delayed and is deeply grateful for the community's patience and understanding. The team worked tirelessly behind the scenes to ensure that this upgrade would meet the highest performance, security, and scalability standards, which required extra time for thorough testing and quality assurance. While the delay may have caused some anticipation, the Aergo team believes that the V4 hard fork’s long-term benefits far outweigh the short-term wait.

In the upcoming interview with the development team, we will explore the technical intricacies behind the V4 hard fork and discuss how these enhancements will drive the future growth of the Aergo platform.

Aergo Successfully Implements V4 Hard Fork: A Major Step Forward for Enterprise Blockchain… was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


An Interview with the Dev Team on Aergo’s V4 Hard Fork

1. Background and Goals of the Hard Fork

1) What is the primary goal of the V4 hard fork?

The primary goal of this V4 hard fork is to enhance the transparency of smart contracts, enabling future integration with machine learning. This will lead to clearer and more traceable data and contracts on the blockchain, benefiting both current and future applications on the Aergo platform.

2) What is the most significant technological advancement of V4 compared to the previous versions?

The most significant technological advancement in V4 is the introduction of a new data model that positions Aergo for applications involving Machine Learning and integration with Small and Large Language Models. This advancement prepares Aergo for future developments in AI and blockchain integration.

2. Technical Improvements and Changes

1) What is the most crucial technical upgrade in the V4 hard fork?

The most crucial technical upgrade in the V4 hard fork is the enhancement of Aergo’s Lua engine, which is used for smart contract development. These improvements to the Lua engine will boost performance, security, and flexibility for developers building on the Aergo blockchain. This upgrade includes the introduction of internal transaction logging and composable transactions, making contracts more transparent.

2) What new features or protocols are being introduced with this hard fork, and what advantages will these provide to users, developers, and enterprise clients?

The V4 hard fork introduces several new features and protocols:

- Composable transactions: Batch transactions and executions together to reduce transaction costs and time for end users. Devs can execute several contracts within a single transaction.

- Text-based contracts/transactions: All Lua contracts on Aergo will be stored as source code instead of bytecode, making them more human-readable and deterministic.

- (Upcoming) Internal transaction logging: This upcoming feature will further improve transparency by providing detailed logs for complex contracts that interact with each other, helping users track activities across interconnected contracts.

These features provide advantages such as improved transparency and usability for users, developers, and enterprise clients. They also lay the groundwork for future developments like DAOs and machine learning applications on the Aergo mainnet.

3) Are there any specific technical requirements that Aergo, focused on the enterprise environment, should prioritize compared to other projects?

Yes, Aergo emphasizes high processing speed and flexibility. For enterprise environments, the usability of smart contracts is critical in building trust and enabling more efficient use of blockchain technology.

3. Risks and Problem-Solving

1) What was the most significant technical challenge in preparing for the V4 hard fork?

The Aergo team faced several critical challenges in preparing for the V4 hard fork, primarily focusing on ensuring compatibility and a smooth transition for existing enterprise customers. Some of the key challenges include:

- Ensuring Backward Compatibility: One of the biggest challenges was to ensure that the new features and changes introduced in V4 don’t break existing functionality or negatively impact current enterprise applications. This is particularly crucial for Aergo, given its focus on enterprise clients.

- Implementing the New Data Model: Introducing a new data model to support Machine Learning and Language Model integration is a significant technological leap. Ensuring this new model works seamlessly with existing blockchain structures without compromising performance or security was a considerable challenge.

- Upgrading the Lua Engine: Enhancing the Lua engine while maintaining its efficiency and ensuring it doesn’t introduce new vulnerabilities was a complex task requiring extensive testing and optimization.

2) Is there a backup plan or rollback process in place if the hard fork is not successfully implemented?

The Aergo team has thoroughly tested various scenarios to ensure the success of the hard fork. These tests include app installation and initialization, wallet functionality, transaction processing, token operations, and more. All testing is conducted on the testnet to ensure stable performance before rolling out changes to the mainnet. Additionally, the team has implemented post-upgrade monitoring processes and prepared client-specific contingency plans.

4. Impact on the Enterprise Environment

1) What impact is this hard fork expected to have on enterprise clients, and what key benefits can they gain from this upgrade?

Enterprise clients can benefit from the enhanced Lua engine, which makes smart contract development more efficient and secure. New features like composable transactions and text-based contracts will also provide more flexibility in building and managing blockchain applications.

2) What concerns do enterprise clients typically have about large-scale upgrades like hard forks, and how are these concerns addressed?

Enterprise clients often worry about stability and security during large-scale upgrades. To address these concerns, Aergo ensures thorough testing and optimization to guarantee the network’s stability and security during and after the upgrade.

5. Post-V4 Roadmap

After completing the V4 hard fork, what are the following major goals? Are there any additional hard forks or upgrades planned in the long term?

With the introduction of features like text-based transactions and the groundwork for integrating machine learning and language models, we at Aergo are setting the stage for more advanced applications of blockchain technology in the enterprise space.

The future development direction of enterprise blockchain technology after this V4 hard fork is focused on increased functionality, better integration with AI and machine learning technologies, and more human-readable and interpretable blockchain interactions. These advancements align with the growing demand for more accessible and powerful blockchain solutions in the enterprise sector.

An Interview with the Dev Team on Aergo’s V4 Hard Fork was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


SelfKey

Merger Proposal: SelfKey to Team up with SingularityDAO and Cogito Finance to Build the Foundation for the Tokenised AI Economy

Gros Islet, Saint Lucia, 15 October 2024 - SelfKey, SingularityDAO, and Cogito Finance have announced the proposal of a strategic token merger to launch Singularity Finance, an EVM Layer-2 for tokenising the AI economy’s Real World Assets (RWA).



TBD

How Decentralized Apps Can Make Everyday Tasks Easy

Learn how TBD's innovators are using our technology and how it can impact you today!


Monday, 28. October 2024

Spruce Systems

New Baby, New Headache: How Verifiable Digital Credentials Could Simplify Insurance Enrollment

Discover how verifiable digital credentials could finally bring sanity to the chaos of adding a newborn to your health insurance—turning a paperwork nightmare into a seamless experience for new parents.

Imagine this: you’ve just experienced one of the greatest joys of the human experience (having a new baby) – followed by one of the most bizarre and stressful bits of paperwork rigamarole imaginable. 

After the joyful part of having a new baby comes the boondoggle of trying to add your newborn to your health insurance policy. What should be a routine necessity unfolds into a daunting undertaking demanding Ocean’s 11-tier planning, meticulous timing, laser-focused attention, and flawless execution to survive with coverage intact.

This same conundrum confronts millions of new parents in the U.S. each year. Conflicting timelines and requirements between hospitals, insurers, and government officials routinely leave parents panicking, adding stress and instability to what is supposed to be the happiest moment of a new parent’s life.

This trap is just one of many strange bureaucratic tangles caused by outdated, slow, paper-based processes. The good news is that it doesn’t have to be this way: verifiable digital credentials can be used to securely digitize sensitive documents, eliminate delays, and close stressful administrative gaps.

“For unlucky parents, it can be *effectively impossible* to assemble the needed paperwork in time to add your newborn to your health insurance.” 

In the U.S., having a child is a “qualifying event” that allows major changes to a health insurance policy. A child’s birth gives parents a 30-day window to add their new family member to their policy. Simple enough! Until you realize ...

To add your newborn to your insurance, you need their birth certificate and social security number. According to the Social Security Administration, the average turnaround time for a new baby’s SSN and card is about two weeks, but it can be up to six! You begin to see the issue.

Health insurers also need a birth certificate for your newborn, and the timeline to deliver that document is even more hair-raising. While a freshly delivered Los Angelino can expect a birth certificate within about ten days, the New York City records agency warns that a newborn’s birth certificate can take four weeks to generate. Furthermore, at every point in this process, there’s the uncertainty of physical mail, presenting the risk that precious documents can be delayed or go missing.

In short, for unlucky parents, it can be *effectively impossible* to assemble the needed paperwork in time to add your newborn to your health insurance. That’s not even considering the possibility of birth complications for the baby or mother, which can distract a family from this extremely dicey enrollment process … at exactly the moment it’s most crucial. Across baby and parenting forums, you can take your pick of panicked stories from worried parents.

The nominal reason for these headaches is pretty straightforward: documents like a birth certificate or a social security card are sensitive, both to verify and to create. But the adoption of verifiable digital credentials can streamline many parts of such processes.

But That’s Not All

The newborn health insurance enrollment scenario is just one example of a paperwork bottleneck that shambles on, a lingering relic of the old, paper-based world. Paper documents are simply slow to process: they take extra time to verify, create, certify, and deliver, compared to digital records. Some time-sensitive documents might even still require physical signatures, placing them at the whims of an individual officer or executive. 

One group for whom this is particularly burdensome is immigrants. Legal U.S. immigrants must wait up to 90 days to receive a permanent resident card (or “green card”) after immigrating. During that time, they’re prevented from leaving the country. Upon approval for permanent residency, the applicant’s temporary travel authorization card is invalidated; however, it can take weeks for the physical card (which is required for travel outside of the U.S.) to be mailed to the new U.S. permanent resident, leaving them in a travel-blocked limbo.

For U.S. citizens, a similar gap can open up while waiting to receive a proper driver’s license in the mail: the temporary license you’re issued in the meantime isn’t considered legal identification for many purposes.

The same kind of delays can create headaches for anyone getting married. Entry into wedded bliss opens a short qualifying window for changing things like health insurance plans, but again, the paperwork process can be slow enough to disrupt the transition. And maybe the most absurd example of this sort of trap is qualifying for COBRA benefits, or the continuation of insurance after losing a job. COBRA coverage itself is most crucial in the months immediately following a layoff, but getting actual enrollment documents can take multiple months. That can leave you paying punishing premiums, without getting actual proof of coverage in return … all while unemployed!

The Verifiable Digital Credential Fix

As the old infomercials said, there’s got to be a better way.

And there is. SpruceID is part of a rapidly growing universe of companies, governments, and tech organizations building the tools for trustworthy verifiable digital credentials. These documents aren’t just digital images of documents like driver’s licenses and birth certificates - those would be extremely vulnerable to fraud or theft. Instead, verifiable digital credentials use a system of cryptographic digital signatures. Digitally-signed documents are stored in a special chip on hardware such as your cell phone. Most verifiable digital credentials can or must be backed up by paper copies – so you’d still be able to stick a paper birth certificate in your kid’s scrapbook.

The primary benefit of these cryptographically secured credentials is that they can be securely presented for authentication over the Internet. This means a lot of bureaucratic headaches can be eliminated through the use of these credentials that can be delivered immediately, rather than via snail mail. While there are still verification processes, the creation of a ‘digital birth certificate’ using these tools would be near-instantaneous. With a bit of smart planning, documents could be delivered to devices belonging to the newborn’s parents, then just as quickly provided to insurance administrators. 
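To give a flavor of the underlying mechanism, here is a toy sketch of the signing idea (real verifiable credentials use standardized formats and key-distribution schemes, not this ad-hoc one):

// Toy illustration of issue-once, verify-anywhere signing
// (hypothetical fields; not a real credential format).
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// The records office's key pair; in practice its public key would be
// published somewhere verifiers can find it.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

const credential = Buffer.from(JSON.stringify({
  type: 'BirthCertificate',
  subject: 'Baby Example',
  issued: '2024-10-28'
}));

// The issuer signs the credential once, at creation...
const signature = sign(null, credential, privateKey);

// ...and an insurer can verify it instantly, offline, without
// contacting the issuer.
console.log(verify(null, credential, publicKey, signature)); // true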

In fact, the entire situation could (and should!) be automated, through a mix of policy and technology. Your child should automatically be added to your health insurance policy as soon as they’re born, and with verifiable digital credentials, the necessary paperwork could be sent directly from the hospital to the insurer.

That’s all in the future, and would require a lot of coordination and agreement among players in the industry. For now, verifiable digital credentials are steadily rolling out in a few more straightforward realms, such as California’s Mobile Driver’s License, Utah’s Passes and Permits, and now U.S. Passports in Google Wallet.

Imagine a world where vital life events—like adding your newborn to your health insurance—don’t come with an overwhelming dose of paperwork stress. With SpruceID’s expertise in verifiable digital credentials, that reality is closer than you think. Ready to see how digital credentials are transforming government processes and simplifying life’s biggest moments? Visit our website to learn more about how we’re making secure, streamlined solutions possible for everyone.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


auth0

Streamlining API Security with AWS Lambda Authorizers and Okta FGA

Discover how combining AWS Lambda Authorizers with Okta FGA can strengthen your API security. This approach ensures precise, scalable access management across your applications with minimal overhead.

Northern Block

Northern Block and Credivera Announce Strategic Partnership to Accelerate Adoption of Digital Trust Ecosystems

Northern Block partners with Credivera to accelerate the adoption of workforce credentials across various ecosystems.


FOR PUBLIC RELEASE

[Toronto, Calgary, Gatineau, October 28, 2024] – Northern Block, a leading provider of digital credentialing and trust establishment solutions, and Credivera, an established market leader in workforce credentialing, are pleased to announce a global strategic partnership and a combined solution offering for workforce digital ecosystem rollouts. This new offering combines Credivera’s credential platform, the Credivera Exchange, with Northern Block’s Orbit Trust Registry. Deployed together, these products enable open, decentralized workforce ecosystems that operate across organizational boundaries.

As workers become increasingly specialized and mobile (i.e., working at more than one location/organization), a decentralized workforce ecosystem that operates across organizational boundaries is emerging as a strategic asset for operational efficiency. Layered into modern economies that are updating their privacy laws and identity systems, there is an opportunity for organizations to boost business agility and start recognizing the benefits of adopting a modern digital trust ecosystem within their operating environment.


Empowering Digital Trust

Both companies are at the forefront of digital trust solutions and understand that digital credentials and trust infrastructure solutions are important parts of the trust equation. To achieve widespread adoption of trusted digital interactions, human trust at the core, strong governance, and robust roots of trust must also be addressed. This complete picture forms the foundation for this strategic partnership.


Standards-Based Trust Infrastructure

Both companies emphasize the need for digital trust solutions based on widely adopted standards.

Credivera Exchange and Orbit Trust Registry support industry standards such as Decentralized Identifiers (DIDs), IETF’s High-Assurance DIDs with DNS, Verifiable Credentials (VCs), OpenID Federation, and the Trust over IP (ToIP) Trust Registry Query Protocol, facilitating interoperability, security, and compliance with the latest digital trust protocols.


Combining Strengths for Best-in-Class Solutions

This partnership brings together the best of both worlds. Credivera has built a highly successful platform for workforce credentials, while Northern Block already offers an advanced, proven trust registry solution that supports any ecosystem in delivering value to its members.

“We are witnessing rapid adoption of workplace digital credentials across many markets. At Northern Block, we’re excited to partner with Credivera, a leader in this space, to bring the power of trust registries to a broader audience,” said Mathieu Glaude, Founder & CEO at Northern Block. “This partnership will extend trust benefits to a broader range of stakeholders helping organizations digitally enable their operations with confidence.”

This collaboration promises to elevate the standard of digital trust, ensuring a brighter, more secure future for all stakeholders.

“We recognize that certain ecosystems can greatly benefit from having a trust registry in their digital trust strategy. We are pleased to partner with Northern Block to offer, alongside our platform, the most advanced trust registry service,” said Dan Giurescu, Founder and CEO at Credivera.


About Northern Block

Northern Block is a global leader in implementing digital trust technologies based on open standards, technologies, and trust frameworks. They collaborate with numerous global governments, sustainability credential ecosystems, travel ecosystems, and internet trust providers to equip them with the necessary toolkits to achieve both technical and human trust. Northern Block was founded in Toronto in 2017, with a presence in Gatineau and Amsterdam. Find out more at northernblock.io.


About Credivera

Credivera pioneered the world’s first secure, open exchange for verifiable credentials, known as the Credivera Exchange. As a leader in workforce management and digital identity, Credivera empowers employees, employers, and organizations that issue credentials by increasing productivity and control over how important credentials are stored and shared. The Credivera Exchange optimizes personal privacy and trust with up-to-date, verifiable credentials secured in a digital wallet, reducing risk for all parties involved. Credivera was founded in Calgary, Alberta, in 2018, with a presence in Toronto and Gatineau. Find out more at Credivera.ca.

————————————————

The post Northern Block and Credivera Announce Strategic Partnership to Accelerate Adoption of Digital Trust Ecosystems appeared first on Northern Block | Self Sovereign Identity Solution Provider.



Ontology

Ontology Weekly Update: October 15th — 21st, 2024


Welcome back to our weekly update! This week, we’re excited to share some amazing developments, community highlights, and fresh collaborations that continue to drive Ontology’s vision forward. Here’s what’s been happening across the Ontology ecosystem!

Ontology Network 🌐

Celebrating zkPass’ New Funding Round

Big congratulations to our friends at zkPass on their successful new funding round! 🎉 This is a huge step for the privacy tech space, and we’re thrilled to see zkPass continue to innovate and push the boundaries of Zero Knowledge Proof (ZKP) technology. Let’s celebrate this milestone and look forward to the amazing advancements to come!

Join Our Quest with LetsExchange

Our latest quest with LetsExchange is in full swing! Don’t miss the chance to participate and earn rewards as we bring our communities closer and promote the power of seamless digital exchange. Join us on this journey and see where the quest leads!

Halloween Celebrations on Galxe 🎃

Halloween is right around the corner, and we’re getting in the festive spirit on Galxe! Join our Halloween-themed event on Galxe to earn exclusive rewards while celebrating in true Web3 style. Let’s make this spooky season one to remember in the Ontology community!

Orange Protocol 🍊

Partnering with Jasper Vault for Decentralized Identity and Privacy

Orange Protocol has officially partnered with Jasper Vault! This collaboration is another step forward in advancing decentralized identity and privacy-first solutions. By working together, Orange Protocol and Jasper Vault aim to create a secure, private, and user-controlled Web3 experience, demonstrating the potential of self-sovereign identity in the digital age. Stay tuned for more updates from this dynamic partnership!

Community 🌍

Engaging Sessions and New Faces

Privacy Hour: This week’s Privacy Hour is all about hackathons! Tune in to discuss Ontology’s latest hackathon events, insights, and outcomes with fellow community members and learn how they’re shaping the future of decentralized identity.

SubQuery Joins Community Update: We’re thrilled to welcome SubQuery to our community updates! Together, we’re fostering a more robust and collaborative Web3 environment, with SubQuery bringing new tools and perspectives into the fold.

We love seeing our community grow and connect, and we’re always inspired by the passion our members bring to each discussion. Thank you for being part of it!

Stay Connected 📱

Stay up to date with everything happening at Ontology by following us on our social channels. Your support and engagement play a crucial role in driving the Ontology ecosystem forward as we work to build a secure, inclusive, and user-centric digital future.

Follow us: Ontology website / ONTO website / OWallet (GitHub) / Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog / Telegram / Announcement / Telegram English / GitHub / Discord

Thank you for joining us for this week’s update! We’re looking forward to seeing you at our next Privacy Hour and celebrating Halloween with you on Galxe. Together, let’s continue to innovate and create a future where privacy and security are at the heart of every digital experience.

Ontology Weekly Update: October 15th — 21st, 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What Is Strong Authentication? Processes & Best Practices

Understand the key components of strong authentication, and best practices to protect your organization and customer data.

A strong authentication process confirms your user's identity before allowing access to digital assets. It keeps your company information safe, as well as data belonging to your customers or employees. And while a single password used to be enough to ensure online account security, this is no longer the case as fraudster attacks become increasingly sophisticated.

 

Learn what strong authentication is and how you can incorporate best practices into your user journey.


TBD

Why Broken Links Are Costing You Brand Deals (And How to Fix It)

Decentralized Identifiers (DIDs) and service endpoints can keep your links accessible even during third-party outages. Ensuring you're in full control of your online presence.

Have you ever watched a creator’s video and thought, "Where did she get that top?" or "I need that protein powder"? You scroll through the comments, only to see the infamous "link in my bio" comment. You rush to click the link, and you're hit with "page not found" 😒. I remember once being so desperate that I took a screenshot of the item and reverse-searched it on Google Images. I found something similar but not what I wanted. SO frustrating. Eventually, I gave up and kept on scrolling.

Now, imagine how many potential sales that creator lost because a third-party platform’s server was down. Their metrics won't even reflect those missed opportunities, making it harder to secure brand deals. Who actually has time for that? That’s when I realized I could use Decentralized Identifiers (DIDs) to create my own decentralized link hub utilizing service endpoints. With this setup, all my links and contact info are stored in one place—owned and controlled by me. Even if a service that houses all my links goes down, my links will always be accessible because they’re not reliant on any external platforms to display them. I’m sharing this in hopes that fellow creators won’t miss out on potential brand deals, and I won't have to cry over a top I never got to buy.

Before I show you exactly how you can create your own decentralized link hub, let's answer some of the questions you're probably asking yourself.

What are Decentralized Identifiers (DIDs)?

So, what exactly is a Decentralized Identifier, or DID? Think of it as your username—the one source of truth for everything you do online—except this one is owned and controlled entirely by you. It’s a unique "address" that's verifiable and doesn’t rely on any central authority like Facebook, Google, or any other service. Instead, DIDs give you the freedom to manage your own identity online, without needing to trust a single platform to store or validate your information.

In the context of a decentralized link hub, your DID becomes the hub for all your important links. It’s not tied to any third-party service, which means you never have to worry about followers scrolling on by simply because your link page isn't working. When you update your links, you only need to do it once: because they're tied to your DID, they stay consistent and up-to-date across all your social platforms, giving you full control.

How are Service Endpoints going to help me?

Now, let’s cover what service endpoints are. These might sound technical, but they’re actually pretty simple—think of them like your digital address/phone book. Remember those huge yellow books you used to sit on at the hair salon? They were filled with phone numbers and addresses, making it easy to find and contact people. Well, service endpoints are kind of like that, except they’re the digital "addresses" for different parts of your online identity. These could be links to your Instagram profile, website, direct messages, or even your affiliate links.

These endpoints live in your DID document. Instead of relying on centralized services like Linktree, your DID acts as the home for all your important links. When someone resolves your DID, they can access the service endpoints that you’ve decided to share.

You can also easily update and delete these links anytime you need to, again without relying on any third-party platform to keep those connections working.

The fix: let's create a decentralized Link Hub

If you’re more of a visual learner, check out my YouTube short where I show you exactly how. For this example, we're going to create a DID with two service endpoints: one pointing to my LinkedIn and the other pointing to my X profile.

Step 1: Import web5/dids package

import {DidDht} from '@web5/dids'

Step 2: Create DID with service endpoints

const myBearerDid = await DidDht.create({
  options: {
    publish: true,
    services: [
      {
        id: 'LinkedIn',
        type: 'professional',
        serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
      },
      {
        id: 'X',
        type: 'personal',
        serviceEndpoint: 'https://x.com/EbonyJLouis'
      }
    ]
  }
});

Now we've created your DID with service endpoints leading to your LinkedIn and X profiles.

Step 3: Let's print our entire DID, also known as a BearerDid, to see our DID document where these service endpoints can be found:

console.log('my bearerDid', myBearerDid)
warning

It is important to never share your full BearerDid: it contains private keys that only you should have access to. The holder of these keys can perform private key operations, like signing data. Check out this Key Management Guide to learn how to properly manage your DID keys.

Output:

my bearerDid BearerDid {
uri: 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey',
document: {
id: 'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey',
verificationMethod: [ [Object] ],
authentication: [
'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0'
],
assertionMethod: [
'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0'
],
capabilityDelegation: [
'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0'
],
capabilityInvocation: [
'did:dht:auontpd44i6rrzrmwry7hsbq8p5seqo7xyz8tnr7fdygsmhykoey#0'
],
service: [ [Object], [Object] ]
},
metadata: { published: true, versionId: '1729705713' },
keyManager: LocalKeyManager {
_algorithmInstances: Map(1) {
[class EdDsaAlgorithm extends CryptoAlgorithm] => EdDsaAlgorithm {}
},
_keyStore: MemoryStore { store: [Map] }
}
}

This output contains your DID string (uri), which is your "username," along with the services array and some authentication and verification methods. To learn more, refer to this DID Document Guide.
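Given the warning above, a safer habit when you only need the public parts is to log just the DID document rather than the whole BearerDid. A small sketch, using the document property shown in the output above:

// Logs only the public DID document: no private keys included
console.log(JSON.stringify(myBearerDid.document, null, 2));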

Step 4: Now let's look closely at just our service array:

console.log("personal link hub", myBearerDid.document.service || "No Services Found");

Output:

decentralized link hub [
{
id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn',
type: 'professional',
serviceEndpoint: 'https://www.linkedin.com/in/ebonylouis'
},
{
id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#X',
type: 'personal',
serviceEndpoint: 'https://x.com/EbonyJLouis'
}
]
How do I share these links?

Now that your DID is in your bio, how do your followers access your links? It's simple: they just need to resolve your DID to see a full list of your shared links:

info

How you resolve your DID depends on the DID method used to create it. In this example, we are using the DHT DID method:

// DID in your bio
const didDhtUri = 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y'

// resolve the DID
const resolvedDhtDid = await DidDht.resolve(didDhtUri);

// access the DID Document's service links
const dhtDidDocument = resolvedDhtDid.didDocument.service;

console.log(dhtDidDocument)

Output:

[
{
id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#LinkedIn',
type: 'professional',
serviceEndpoint: [ 'https://www.linkedin.com/in/ebonylouis' ]
},
{
id: 'did:dht:xihb478dd7w9cyj33b6g5cjriuw6drwaxrx9ppf3bwn839pmhi6y#X',
type: 'personal',
serviceEndpoint: [ 'https://x.com/EbonyJLouis' ]
}
]

As you can see, we’ve successfully set up our service endpoints to point to both my LinkedIn and X accounts. Now it’s your turn to secure the bag and create your own decentralized link hub! And if you tweet about it, don’t forget to tag me.

To learn more about Decentralized Identity, check out TBD's Docs.

Sunday, 27. October 2024

KuppingerCole

The Human Factor: Addressing Mental Health in Cybersecurity


Burnout, fatigue, depression: This episode is all about the mental health challenges faced by cybersecurity professionals, highlighting the increasing pressures and responsibilities in the field. Matthias invited experts Sarb Sembhi and Dr. Kashyap Thimmaraju to discuss the impact of these challenges on individuals and organizations, emphasizing the need for better support systems, transparency, and proactive strategies to promote mental well-being in the cybersecurity industry.

Mental Health in Cybersecurity Foundation: https://www.virtuallyinformed.com/mhincs 

LinkedIn Group: https://www.linkedin.com/groups/12989900/ 

The Mental Health in Cybersecurity Charter: https://www.virtuallyinformed.com/mhincs-foundation-charter 

Contact the Mental Health in Cybersecurity Foundation 

Research: research@mhincs-foundation.org

Community of Practice: cop@mhincs-foundation.org



Saturday, 26. October 2024

Innopay

INNOPAY at SEFA Career Week

INNOPAY at SEFA Career Week | 25 November 2024 | Amsterdam, The Netherlands

INNOPAY is excited to participate in SEFA Career Week in Amsterdam! Join us for an insightful day filled with opportunities to learn more about our work and engage with our team.

Event highlights:

Interactive case solving: Participate in a hands-on case-solving session that gives you a glimpse into real-world challenges we tackle at INNOPAY.

Ladies' dinner: In the evening, we invite female students to a special dinner, providing a relaxed atmosphere for networking, discussions, and insights into career opportunities.
 

This event is perfect for students eager to gain practical experience and make valuable connections. We look forward to meeting you and exploring how you can be part of the INNOPAY journey!

Don’t miss this unique opportunity to be part of the event. To apply, please visit the SEFA Career Week website. We look forward to meeting you and sharing more about the exciting journey at INNOPAY!


INNOPAY & Oliver Wyman Art Night

INNOPAY & Oliver Wyman Art Night from 14 Nov 2024 till 14 Nov 2024 Trudy Zomer 26 October 2024 - 18:36 Frankfurt 50.121329352631, 8.6365638 Join us for an unforgettable evening at the first-
INNOPAY & Oliver Wyman Art Night from 14 Nov 2024 till 14 Nov 2024 Trudy Zomer 26 October 2024 - 18:36 Frankfurt 50.121329352631, 8.6365638

Join us for an unforgettable evening at the first-ever INNOPAY & Oliver Wyman Art Night in Frankfurt! This unique career event brings together INNOPAY and Oliver Wyman for an inspiring night of art, networking, and insight into our work and culture.

What to expect:

Company presentations: Discover what makes INNOPAY and Oliver Wyman leaders in our fields, with presentations that highlight our vision, projects, and career opportunities.

Dinner & networking: Enjoy a delicious dinner while meeting our team members and learning about the roles and paths available with us.

Canvas painting: Unleash your creativity with a guided canvas painting session, followed by drinks in a relaxed, informal setting.

 

This is the perfect opportunity to learn more about our companies, meet potential colleagues, and explore how your future career could begin with us. Interested? Sign up here.

Don’t miss this unique career experience! We can’t wait to see you there.

Friday, 25. October 2024

paray

The Personal Financial Data Rights Rule

On October 22, 2024, the Consumer Financial Protection Bureau (“CFPB”) finalized the Personal Financial Data Rights rule, which moves the United States closer to “an open banking system in which consumers, not dominant firms, control their data.”  The CFPB is generally tasked with “promoting fair, transparent, and competitive markets for consumer financial products and services.” … Continue reading The Personal Financial Data Rights Rule →

auth0

Deploy Secure Spring Boot Microservices on Google GKE Using Terraform and Kubernetes

Deploy a cloud-native Java Spring Boot microservice stack secured with Auth0 on Google GKE using Terraform and Kubernetes.

KuppingerCole

Dec 10, 2024: From Detection to Recovery: PAM's Crucial Role in Incident Management

In an era where cyber threats are constant, organizations must prepare not for if a breach will happen but when. The urgency to identify, address, and bounce back from security incidents has never been greater. Privileged Access Management (PAM) plays a vital role in bolstering defenses and streamlining responses to these incidents. However, many organizations still struggle to unlock its full benefits, leaving critical vulnerabilities exposed.

PingTalk

CMMC auditors see streamlines with IAM for DIB compliance

CMMC auditors recognize IAM as crucial for securing DIB compliance. Identity management controls can protect CUI and national security, and they affect contract revenues.

Certified Cybersecurity Maturity Model Certification (CMMC) auditors know firsthand how identity management is a critical linchpin in maintaining security. When assessing a Defense Industrial Base (DIB) supplier's compliance with CMMC controls, identity and access management (IAM) is often one of the areas where they find significant vulnerabilities. The stakes are high: a misstep here could compromise sensitive Controlled Unclassified Information (CUI) and, ultimately, national security, and may jeopardize a company’s reputation. Additionally, DIB revenues can suffer if a supplier fails the audit and does not qualify for lucrative contracts.

 

In this blog, we share an auditor’s concerns and insights when evaluating a typical DIB’s identity solution, hoping to help others understand how to meet the CMMC requirements more effectively to protect against cyberattacks as well as land and maintain government contracts.

Thursday, 24. October 2024

HYPR

Fake IT Workers: How HYPR Stopped a Fraudulent Hire


Since 2022, the FBI and other agencies have been sounding the alarm about North Koreans posing as US or other non-North Korean based IT workers and infiltrating companies. In July, security firm KnowBe4 publicly revealed that they unknowingly hired a fake IT worker from North Korea. Fortunately they detected and blocked access as he attempted to load malware onto his system-connected laptop. Since then, similar stories have flooded in. Last week, reports surfaced that a fake North Korean IT worker hired by an unnamed company stole proprietary data and demanded a ransom payment in order to keep the hack secret.

However, the threat from interview fraud and fake employees goes far beyond the North Korean schemes. Moreover, large enterprises are not the only targets. At HYPR, we recently experienced an attempted fraud event, and thwarted it through our Identity Assurance platform. In support of bringing awareness to the market and other businesses, HYPR has elected to publicly report our experience and how we mitigated it.

Outing the Imposter 

After multiple rounds of live video interviews, HYPR decided to extend a contract to a European software engineer through a Technology Services contracting firm. This prospective new hire — let’s call him “John Doe” — was required to go through HYPR’s new joiner security processes. This is in addition to the background checks already performed during the candidate pre-hire screening. On October 17, HYPR began “John’s” day 1 onboarding and credentialing.

Onboarding an Employee at HYPR

At HYPR, we use our HYPR Affirm solution to conduct multiple verifications and checks for new hires before issuing credentials. Verifications may include possession checks, biometrics, telemetry, document authentication, video verification and other identifiers. Affirm is configurable to the verification level required by an organization, based on its needs and the role of an individual they hire. Below is the flow we typically use at HYPR:

The Warning Signs

The new hire check raised several red flags. Although John’s phone number was verified, a location check did not match the information he had provided.

John’s passport passed the document review; however, the facial verification check indicated discrepancies between the passport photo and face scan. The liveness detection test also failed.

Alarm bells began ringing for the team, but the prospective employee claimed that he was having technical difficulties with the document uploading and verification part of the onboarding.

HYPR encouraged him to try the process again. A second attempt an hour later now showed a different location and a different browser language.

The final step was live video verification to confirm that this was indeed the same person we originally interviewed. At this point, John dropped out and emailed that he could not turn on his video due to issues with his camera. We contacted our Technology Services provider to explain the warning signs we were seeing. The next day, John informed our provider that he had found a different opportunity and decided not to continue with onboarding at HYPR.

In the ordinary course of events, onboarding employees with Affirm is efficient and seamless. If red flags begin to manifest, however, the friction is increased to detect other risk indicators and prevent a fraudulent hire from proceeding.

Onboarding With HYPR Affirm

Tying Credential Provisioning to Identity Verification

It is critical to note that at no point in the onboarding process was “John” issued credentials to access any HYPR systems. This is because HYPR uses multi-factor verification (MFV) to issue phishing-resistant MFA credentials. This ensures an account is always tied to a verified, real-world identity.

By contrast, in the KnowBe4 case, they shipped the fake IT worker a provisioned FIDO-enabled YubiKey so he could log into their network. This meant that the North Korean operative had at least limited access from the get go. He was caught and blocked only after he did something that was detectable by security monitoring tools. Had he been a highly sophisticated hacker, he may have been able to bypass some of those tools.

Key Takeaways

The Fake Employee Problem Goes Beyond North Korea: It’s not just North Korea perpetrating employee fraud schemes, and anyone can be a target.

Tie Credential Issuance to Identity Verification: Don’t rely on checks done during the interview or HR onboarding. Implement a multi-factor verification process to tie real-world identity to the digital identity during the provisioning process.

Implement Video-Based Verification: Video-based verification is a critical identity control, and not just at onboarding. Microsoft recently announced that it’s using video-based identity verification for critical credential recovery processes to combat social engineering threats.

A Unified Identity Assurance Approach: Experts increasingly recommend that organizations implement a holistic Identity Assurance approach that unifies phishing-resistant passwordless authentication, adaptive risk mitigation, and automated identity verification.


Ocean Protocol

DF112 Completes and DF113 Launches

Predictoor DF112 rewards available. DF113 runs Oct 24 — Oct 31, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 112 (DF112) has completed.

DF113 is live today, Oct 24. It concludes on October 31. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF113 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.

To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.

To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF113

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF112 Completes and DF113 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

The Dawn of Open Banking in the U.S.: How Identity Powers CFPB’s Personal Financial Data Rights Rule

Enable CFPB personal financial data rights rule with digital identity to power US open banking.

auth0

ASP.NET Core Authentication Behind Proxies

Learn how to overcome ASP.NET Core authentication configuration issues when your application is behind a proxy, load balancer, gateway, container, or similar system.

BlueSky

Bluesky Announces Series A to Grow Network of 13M+ Users


Bluesky now exceeds 13 million users, the AT Protocol developer ecosystem continues to grow, and we’ve shipped highly requested features like direct messages and video. We’re excited to announce that we’ve raised a $15 million Series A financing led by Blockchain Capital with participation from Alumni Ventures, True Ventures, SevenX, Amir Shevat of Darkmode, co-creator of Kubernetes Joe Beda, and others.

Our lead, Blockchain Capital, shares our philosophy that technology should serve the user, not the reverse — the technology being used should never come at the expense of the user experience. Additionally, this fund has a uniquely deep understanding of our decentralized foundation and has extensive experience building developer ecosystems, so it’s a natural partnership as we continue to invest in the ATmosphere (the AT Protocol developer ecosystem). This does not change the fact that the Bluesky app and the AT Protocol do not use blockchains or cryptocurrency, and we will not hyperfinancialize the social experience (through tokens, crypto trading, NFTs, etc.). To ensure we and our users benefit fully from this expertise, partner Kinjal Shah will join our board. Kinjal shares our vision for a social media ecosystem that empowers the people who use it, and we are glad to have her support as we invest in driving the adoption of decentralized social.

With this fundraise, we will continue supporting and growing Bluesky’s community, investing in Trust and Safety, and supporting the ATmosphere developer ecosystem. In addition, we will begin developing a subscription model for features like higher quality video uploads or profile customizations like colors and avatar frames. Bluesky will always be free to use — we believe that information and conversation should be easily accessible, not locked down. We won’t uprank accounts simply because they’re subscribing to a paid tier.

Additionally, we’re proud of our vibrant community of creators, including artists, writers, developers, and more, and we want to establish a voluntary monetization path for them as well. Part of our plan includes building payment services for people to support their favorite creators and projects. We’ll share more information as this develops.

Bluesky’s open technology, the AT Protocol, makes a whole ecosystem of apps possible. We’re excited that developers have already begun building their own applications with totally different purposes from the Bluesky app. For example, Smoke Signal is an events app, Frontpage is a web forum, and Bluecast is an audio app (that includes karaoke with licensed songs)! We hypothesize that monetization strategies like subscriptions, domain-name registrations, and payments to creators will enable these independent apps to grow as well.

With every month that passes, the need for an open social network becomes more clear. We’re very excited about where we’re headed — we’re building not just another social app, but an entire network that gives users freedom and choice. Thank you for joining us.

What have we done since our last fundraise?

Since raising our seed round last year, we have:

Grown Bluesky from an invite-only app with 1M users to an open app serving more than 13M people! We’ve welcomed millions of people from the United States, Brazil, Japan, the United Kingdom, Germany, and more.

Launched federation for self-hosters and developers. Now there are over 1,000 other personal data servers (PDS) outside of Bluesky.

Launched custom feeds, making algorithmic choice a reality. Now there are over 50k feeds on Bluesky.

Invested heavily in anti-harassment tooling and Trust and Safety. Built labeling services and opened the ability for anyone to run foundational pieces of stackable moderation. Additionally, open-sourced Ozone, a moderation tool.

Shipped highly-requested features like direct messaging, GIFs, and video.

Pioneered novel features like starter packs to help communities find each other in a privacy-preserving way.

Partnered with organizations like Buffer to add Bluesky integrations and Namecheap to sell domains.

Announced $20k in developer grants and supported the growth of the ATmosphere through additional developer documentation, talks, and partnerships.

Much more, including a new logo and a public web interface!

Traditional social media companies have enclosed the online commons, locked down their APIs to shut out independent developers, and deployed black box algorithms that leave us guessing. This era of old social is over — at Bluesky, we’re returning choice and power to you.

Wednesday, 23. October 2024

liminal (was OWI)

The Key to a Passwordless Future Falls in the Hands of the Enterprise Chief Information Security Officer

The post The Key to a Passwordless Future Falls in the Hands of the Enterprise Chief Information Security Officer appeared first on Liminal.co.

KuppingerCole

Jan 23, 2025: Identity-Centric Zero Trust Infra Access

Given the rapid advancements in technology, infrastructure security must evolve beyond traditional perimeter defenses. The rise of cloud applications, remote workforces, and distributed environments necessitates a shift towards identity-centric Zero Trust access. This approach removes the notion of network segments and focuses on granting secure access to users based on dynamic policies and identity risk, ensuring only authorized users interact with critical resources.

Okta

How to Build Secure Okta Node.js Integrations with DPoP


Integrating with Okta management API endpoints might be a good idea if you are trying to read or manage Okta resources programmatically. This blog demonstrates how to securely set up a node application to interact with Okta management API endpoints using a service app.

Okta API management endpoints can be accessed using an access token issued by the Okta org authorization server with the appropriate scopes needed to make an API call. This can be either through authorization code flow for the user as principal or client credentials flow for a service as principal.

For this blog, we will examine the OAuth 2.0 client credentials flow. Okta requires the private_key_jwt token endpoint authentication type for this flow. Access tokens generated by the Okta org authorization server expire in one hour. Any client can call Okta API endpoints with the token during this hour.
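If you want to see that one-hour lifetime for yourself, you can decode a token locally. A quick sketch using the jsonwebtoken package (installed later in this tutorial); the accessToken variable is a placeholder assumed to hold a token returned by the org authorization server:

const jwt = require('jsonwebtoken');

// Placeholder: assumes a real access token is available in the environment.
const accessToken = process.env.DEMO_ACCESS_TOKEN;

// jwt.decode() reads the claims without verifying the signature,
// which is fine for local inspection.
const { iat, exp } = jwt.decode(accessToken);
console.log(`Lifetime: ${(exp - iat) / 60} minutes`);
console.log(`Expires at: ${new Date(exp * 1000).toISOString()}`);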

How do you make OAuth 2.0 access tokens more secure?

Increase security by constraining the token to the sender. By constraining the token sender, the resource server knows every request originates from the original client that initially requested the token. OAuth 2.0 Demonstrating Proof of Possession (DPoP) is a way to achieve this, as explained in this RFC. You can read more about DPoP in this post:

Elevate Access Token Security by Demonstrating Proof-of-Possession

Protect your OAuth 2.0 access token with sender constraints. Learn about possession proof tokens using DPoP.

Alisa Duncan
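Conceptually, a DPoP proof is just a short-lived JWT that the client signs with its own key and attaches to each request. Here is a minimal sketch, not the exact code we'll build below: the generated keypair and token URL are placeholders, and the claim and header names follow RFC 9449:

const crypto = require('crypto');
const jwt = require('jsonwebtoken');

// Placeholder keypair for illustration; in practice you'd load and reuse your own.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

const dpopProof = jwt.sign(
  {
    htm: 'POST', // HTTP method of the request being made
    htu: 'https://{yourOktaDomain}/oauth2/v1/token', // target URI of the request
    jti: crypto.randomUUID() // unique id, so each proof is single-use
  },
  privateKey,
  {
    algorithm: 'RS256',
    expiresIn: '5m',
    header: {
      typ: 'dpop+jwt',
      alg: 'RS256',
      jwk: publicKey.export({ format: 'jwk' }) // public key the server binds the token to
    }
  }
);

The server verifies the proof's signature against the embedded jwk and checks htm/htu against the actual request, which is what makes a stolen bearer token useless on its own.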

To demonstrate this, we will first set up a node application with a service app without requiring DPoP. Then, we’ll add the DPoP constraint and make the necessary changes in our app to implement it.

Table of Contents

How do you make OAuth 2.0 access tokens more secure?
Create a service app with OAuth 2.0 client credentials without DPoP
Add OAuth 2.0 and OpenID Connect (OIDC) to your Node.js service application
Configure OAuth 2.0 in the Node.js service
Create an OAuth 2.0-compliant Node.js service app
Secure access tokens by adding DPoP to the Node.js service
Experiment with DPoP and API scopes for Okta API and custom resource server calls
Learn more about Okta Management API, DPoP, and OAuth 2.0

Create a service app with OAuth 2.0 client credentials without DPoP

Prerequisites

You’ll need the following tools:

Node.js v18 or greater
IDE (I used VS Code)
Terminal window (I used the integrated terminal in VS Code)

Add OAuth 2.0 and OpenID Connect (OIDC) to your Node.js service application

Before you begin, you’ll need a free Okta developer edition account. Sign up for a free Workforce Identity Cloud Developer Edition account if you don’t already have one.

Open your Okta dashboard in a browser. Navigate to Applications > Applications. Select API Services and press Next. Name your application and press Save.

In the General tab, note the Client ID value and your Okta domain. You can find the Okta domain by expanding the settings menu in the toolbar. You need these values for your application configuration.

Press edit in the Client Credentials section and follow these steps:

Change the Client authentication to Public Key / Private Key
In the PUBLIC KEYS section, press the Add key button.
Click Generate new key to have Okta generate a new key.
Save the private key (in PEM format) in a file called cc_private_key.pem for later use.
Press Save

In the General Settings section, press edit and make the following changes:

Disable Proof of possession > Require Demonstrating Proof of Possession (DPoP) header in token requests
Press Save

Navigate to the Okta API Scopes tab and grant the okta.users.read scope.

In the Admin roles tab, press Edit assignments. Find the Read-only Administrator in the Role selection menu, and press the Save Changes button.

Those are all of the changes required in Okta until you re-enable DPoP.

Configure OAuth 2.0 in the Node.js service

Create a project directory for local development named okta-node-dpop. Open the project directory in your IDE. Create a file called .env in the project root directory and add the following configuration settings:

OKTA_ORG_URL={yourOktaDomain}
OKTA_CLIENT_ID={yourClientID}
OKTA_SCOPES=okta.users.read
OKTA_CC_PRIVATE_KEY_FILE=./assets/cc_private_key.pem

Save the private key file from the earlier step as assets/cc_private_key.pem in the root directory.

Create an OAuth 2.0-compliant Node.js service app

Open a terminal window in the project directory and run npm init to create the scaffolding. Press Enter to accept all defaults.

Install dependencies for the project by running:

npm i dotenv@16.4.5 jsonwebtoken@9.0.2

Create an oktaService.js file in the project root. We’ll add the basic foundation of authenticating and calling Okta endpoints in this file. This file contains three key functions:

oktaService.authenticate(..) gets an access token by:
Generating a private key JWT required for authentication and signing it using a keypair registered in the Okta application
Generating the token request to the Okta org authorization server
Retrieving and storing the access token for future calls (note: this token is valid for one hour by default at the time of writing this article)

oktaService.managementApiCall(..) makes the Okta management API calls and adds the necessary headers and tokens to enable the request

oktaHelper contains utility methods to store the Okta configuration and access token and to generate the private key JWT and the token request

Add the following code to the oktaService.js file:

const fs = require("fs"); const crypto = require("crypto"); const jwt = require("jsonwebtoken"); require("dotenv").config(); // Loads variables in .env file into the environment const oktaHelper = { oktaDomain: process.env.OKTA_ORG_URL || "", // Okta domain URL oktaClientId: process.env.OKTA_CLIENT_ID || "", // Client ID of API service app oktaScopes: process.env.OKTA_SCOPES || "", // Scopes requested - Okta management API scopes ccPrivateKeyFile: process.env.OKTA_CC_PRIVATE_KEY_FILE || "", // Private Key for signing Private key JWT ccPrivateKey: null, accessToken: "", getTokenEndpoint: function () { return `${this.oktaDomain}/oauth2/v1/token`; }, // Token endpoint getNewJti: function () { return crypto.randomBytes(32).toString("hex"); }, // Helper method to generate new identifier generateCcToken: function () { // Helper method to generate private key jwt let privateKey = this.ccPrivateKey || fs.readFileSync(this.ccPrivateKeyFile); let signingOptions = { algorithm: "RS256", expiresIn: "5m", audience: this.getTokenEndpoint(), issuer: this.oktaClientId, subject: this.oktaClientId, }; return jwt.sign({ jti: this.getNewJti() }, privateKey, signingOptions); }, tokenRequest: function (ccToken) { // generate token request using client_credentials grant type return fetch(this.getTokenEndpoint(), { method: "POST", headers: { Accept: "application/json", "Content-Type": "application/x-www-form-urlencoded", }, body: new URLSearchParams({ grant_type: "client_credentials", scope: this.oktaScopes, client_assertion_type: "urn:ietf:params:oauth:client-assertion-type:jwt-bearer", client_assertion: ccToken, }), }); }, }; const oktaService = { authenticate: async function () { // Use to authenticate and generate access token if (!oktaHelper.accessToken) { console.log("Valid access token not found. Retrieving new token...\n"); let ccToken = oktaHelper.generateCcToken(); console.log(`Using Private Key JWT: ${ccToken}\n`); console.log(`Making token call to ${oktaHelper.getTokenEndpoint()}`); let tokenResp = await oktaHelper.tokenRequest(ccToken); let respBody = await tokenResp.json(); oktaHelper.accessToken = respBody["access_token"]; console.log( `Successfully retrieved access token: ${oktaHelper.accessToken}\n` ); } return oktaHelper.accessToken; }, managementApiCall: function (relativeUri, httpMethod, headers, body) { // Construct Okta management API calls let uri = `${oktaHelper.oktaDomain}${relativeUri}`; let reqHeaders = { Accept: "application/json", Authorization: `Bearer ${oktaHelper.accessToken}`, ...headers, }; return fetch(uri, { method: httpMethod, headers: reqHeaders, body, }); }, }; module.exports = oktaService;

Add a new file named app.js in the project root folder. This is the entry point for running our Node.js service application. In this file, we’ll do the following:

Import oktaService
Create an async wrapper to execute asynchronous code
Authenticate to Okta by calling oktaService.authenticate()
Validate the previous step by listing users using a GET call to Okta’s /api/v1/users endpoint

Paste the following code into the app.js file:

const oktaService = require('./oktaService.js');

(async () => {
  await oktaService.authenticate();
  let usersResp = await oktaService.managementApiCall('/api/v1/users', 'GET');
  if (usersResp.status == 200) {
    let respBody = await usersResp.json();
    console.log(`Users List: ${JSON.stringify(respBody)}\n`);
  } else {
    console.log('API error', usersResp);
  }
})();

Next, update this as the entry point. In the package.json file, update the scripts property with the following:

"scripts": { "start": "node app.js" }

This gives us an easy way to run the app. Run the app using npm start. You should see a list of console logs:

Valid access token not found. Retrieving new token...

Using Private Key JWT: eyJh........

Making token call to https://........../oauth2/v1/token
Successfully retrieved access token: eyJ..................

Users List: [.........]

If you receive any errors, this is a good time to troubleshoot and resolve issues before adding DPoP.

Secure access tokens by adding DPoP to the Node.js service

Why isn’t OAuth 2.0 client credential flow enough?

Our setup used the client_credentials grant type to authenticate and get an access token. If someone gets hold of the private_key_jwt, they cannot replay it beyond expiration (I reduced it to 5 minutes to shorten this window). However, if someone gets hold of the access token, they can use it for up to 1 hour, which is the default expiration time of an access token.

Constraining the token sender is one way to make the access token more secure. How can you do that? By adding the Demonstrating Proof of Possession (DPoP) OAuth extension method to the access token interaction. The technique adds a sender-generated token for each call it makes. Doing so prevents replay attacks even before tokens expire since each call needs a fresh DPoP token. Here is the detailed flow:

You’ll enable DPoP in Okta application settings to experiment with sender-constrained tokens. Open the Okta Admin Console in your browser and navigate to Applications > Applications to see the list of Okta applications in your Okta account. Open the service application to edit it.

In your service app’s General Settings section, change Proof of possession > Require Demonstrating Proof of Possession (DPoP) header in token requests to true. Then click Save.

You need a new public/private key pair to sign the DPoP proof JWT. If you know how to generate one, feel free to skip this step. I used the following steps to generate it:

Go to JWK generator
Select the following and then click Generate:
Key Use: Signature
Algorithm: RS256
Key ID: SHA-256
Show X.509: Yes
Copy the Public Key (JSON format) and save it to assets/dpop_public_key.json
Copy the Private Key (X.509 PEM format) and save it to assets/dpop_private_key.pem. (Do not click Copy to Clipboard; this copies the key as a single line, which will not work with the following steps. Instead, copy the value manually and save it.)

Now that you have a new keypair for DPoP, you’ll add the variables to the project. In the .env file, add the new file paths:

....
OKTA_SCOPES=okta.users.read
OKTA_CC_PRIVATE_KEY_FILE=./assets/cc_private_key.pem
OKTA_DPOP_PRIVATE_KEY_FILE=./assets/dpop_private_key.pem
OKTA_DPOP_PUBLIC_KEY_FILE=./assets/dpop_public_key.json

Add the DPoP-related code to oktaService.js. First, add the key file paths to the config; we'll use them while adding DPoP to our methods:

const oktaHelper = {
  .......
  ccPrivateKeyFile: process.env.OKTA_CC_PRIVATE_KEY_FILE || '', // Private Key for signing Private key JWT
  ccPrivateKey: null,
  // Add this code ======================
  dpopPrivateKeyFile: process.env.OKTA_DPOP_PRIVATE_KEY_FILE || '', // Private key for signing DPoP proof JWT
  dpopPublicKeyFile: process.env.OKTA_DPOP_PUBLIC_KEY_FILE || '', // Public key for signing DPoP proof JWT
  dpopPrivateKey: null,
  dpopPublicKey: null,
  // Add above code ======================
  accessToken: '',
  .....
}

Add a helper method to generate a DPoP proof. This method embeds the public key (JWK) in the JWT header and constructs the JWT based on the format defined in the spec.

const oktaHelper = {
  .....
  // Add this as the last attribute of oktaHelper object
  generateDpopToken: function (htm, htu, additionalClaims) {
    let privateKey = this.dpopPrivateKey || fs.readFileSync(this.dpopPrivateKeyFile);
    let publicKey = this.dpopPublicKey || fs.readFileSync(this.dpopPublicKeyFile);
    let signingOptions = {
      algorithm: 'RS256',
      expiresIn: '5m',
      header: {
        typ: 'dpop+jwt',
        alg: 'RS256',
        jwk: JSON.parse(publicKey)
      }
    };
    let payload = {
      ...additionalClaims,
      htu,
      htm,
      jti: this.getNewJti()
    };
    return jwt.sign(payload, privateKey, signingOptions);
  }
};

Next, add the DPoP proof token to the tokenRequest method. This method gets the newly generated DPoP proof token and adds it to the token request as a header.

// Add dpopToken as a new parameter
tokenRequest: function (ccToken, dpopToken) {
  // generate token request using client_credentials grant type
  return fetch(this.getTokenEndpoint(), {
    method: 'POST',
    headers: {
      Accept: 'application/json',
      'Content-Type': 'application/x-www-form-urlencoded',
      // New Code - Start
      DPoP: dpopToken
      // New Code - End
    },
    ...
  });
},

Add the following steps to the authenticate method to add DPoP.

Generate a new DPoP proof for the POST method and token endpoint
Make the token call with both the private_key_jwt and the DPoP JWT
Handle the nonce: Okta adds an extra security measure by requiring a nonce on token requests that use DPoP, and it responds to token requests that don't include one with the use_dpop_nonce error. Read more about the nonce in the spec. After this step, we'll generate a new DPoP proof JWT that includes the nonce value in its payload
Make the token call again with this new JWT

Once we follow these steps, we’ll have a new access token to use in our API call. Let’s implement the steps. Update the authenticate method to the following:

authenticate: async function () {
  // Use to authenticate and generate access token
  if (!oktaHelper.accessToken) {
    console.log('Valid access token not found. Retrieving new token...\n');
    let ccToken = oktaHelper.generateCcToken();
    console.log(`Using Private Key JWT: ${ccToken}\n`);
    // New Code - Start
    let dpopToken = oktaHelper.generateDpopToken('POST', oktaHelper.getTokenEndpoint());
    console.log(`Using DPoP proof: ${dpopToken}\n`);
    // New Code - End
    console.log(`Making token call to ${oktaHelper.getTokenEndpoint()}`);
    // Update following line by adding dpopToken parameter
    let tokenResp = await oktaHelper.tokenRequest(ccToken, dpopToken);
    let respBody = await tokenResp.json();
    // New Code - Start
    if (tokenResp.status != 400 || (respBody && respBody.error != 'use_dpop_nonce')) {
      console.log('Authentication Failed');
      console.log(respBody);
      return null;
    }
    let dpopNonce = tokenResp.headers.get('dpop-nonce');
    console.log(`Token call failed with nonce error \n`);
    dpopToken = oktaHelper.generateDpopToken('POST', oktaHelper.getTokenEndpoint(), { nonce: dpopNonce });
    ccToken = oktaHelper.generateCcToken();
    console.log(`Retrying token call to ${oktaHelper.getTokenEndpoint()} with DPoP nonce ${dpopNonce}`);
    tokenResp = await oktaHelper.tokenRequest(ccToken, dpopToken);
    respBody = await tokenResp.json();
    // New Code - End
    oktaHelper.accessToken = respBody['access_token'];
    console.log(`Successfully retrieved access token: ${oktaHelper.accessToken}\n`);
  }
  return oktaHelper.accessToken;
}

Before proceeding, make sure you enabled DPoP in your Okta service application. Now, test the steps by running npm start in the terminal. OOPS! You received an access token, but the call to the users API failed with a 400 status. We didn’t include the DPoP proof in this API call. With DPoP enabled, we must include a new DPoP proof for every call. This prevents malicious actors from reusing stolen access tokens.

Let’s add some code to include DPoP proof during every API call.

In the oktaService.js file, add a helper method to generate the hash of the access token or ath value. You’ll use this value later to bind access tokens with DPoP proofs:

const oktaHelper = {
  .....,
  // Add as the last attribute of oktaHelper object
  generateAth: function (token) {
    return crypto.createHash('sha256').update(token).digest('base64').replace(/\//g, '_').replace(/\+/g, '-').replace(/\=/g, '');
  }
};
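As an aside, on Node.js v16 and later the same base64url encoding is available natively, so the replace() chain above could likely be shortened; a small sketch:

// Equivalent ath computation on Node.js v16+, using the native base64url digest
generateAth: function (token) {
  return crypto.createHash('sha256').update(token).digest('base64url');
}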

A valid DPoP proof JWT includes the access token hash (ath) value. To make this change, update the managementApiCall method:

managementApiCall: function (relativeUri, httpMethod, headers, body) {
  // Construct Okta management API calls
  let uri = `${oktaHelper.oktaDomain}${relativeUri}`;
  // New Code - Start
  let ath = oktaHelper.generateAth(oktaHelper.accessToken);
  let dpopToken = oktaHelper.generateDpopToken(httpMethod, uri, { ath });
  // New Code - End
  // Update reqHeaders object
  let reqHeaders = {
    'Accept': 'application/json',
    'Authorization': `DPoP ${oktaHelper.accessToken}`,
    'DPoP': dpopToken,
    ...headers
  };
  return fetch(uri, { method: httpMethod, headers: reqHeaders, body });
}

Run npm start. Voila! You see a list of users!

We successfully authenticated to Okta with a service app demonstrating DPoP and are using this access token and DPoP proof to access Okta Admin Management API endpoints.

Experiment with DPoP and API scopes for Okta API and custom resource server calls

You can download the completed project from the GitHub repository.

Try modifying the project using different Okta API scopes and experimenting with other endpoints. Ensure you give permissions to your service app by assigning appropriate Admin roles. To improve security, you can implement similar protection to your custom resource server endpoints using a custom authorization server and custom set of scopes.

Learn more about Okta Management API, DPoP, and OAuth 2.0

In this post, you accessed the Okta management API from a Node.js app and made it more secure by adding DPoP support. I hope you enjoyed it! If you want to learn more about the ways you can incorporate authentication and authorization security in your apps, you might want to check out these resources:

Elevate Access Token Security by Demonstrating Proof-of-Possession
Okta Management API reference
OAuth 2.0 and OpenID Connect overview
Implement OAuth for Okta
Configure OAuth 2.0 Demonstrating Proof-of-Possession

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!

Tuesday, 22. October 2024

TBD on Dev.to

How Verifiable Credentials Can Help Combat Fake Online Reviews


The Federal Trade Commission (FTC) has introduced a new rule banning fake online reviews. This rule, which penalizes businesses and individuals involved in the sale or purchase of fake reviews, represents a much-needed step in promoting trust online. But while enforcement is crucial, there's still the challenge of identifying which reviews are legitimate. This is where Verifiable Credentials (VCs) can provide a solution.

The Problem with Fake Reviews

Fake reviews have been an issue for years, distorting consumer choices. As FTC Chair Lina Khan pointed out, these reviews “pollute the marketplace and divert business away from honest competitors.”

Let's face it, with pretty much every social networking site turning into an online shopping mall, many purchasing decisions are influenced by online reviews. I, personally, do most of my shopping online and I rely very heavily on reviews. I often question which ones are actually real, and if I'm not sure, I often shy away from making the purchase at all - which isn't good for me or the business.

The new FTC rule aims to crack down on this problem by prohibiting reviews from people who don’t exist, those who have no real experience with the product, or those misrepresenting their experience. It also bans businesses from creating or selling fake reviews. While these measures are great, enforcing them effectively presents challenges, especially with AI-generated content.

How Can Verifiable Credentials Help

Verifiable Credentials are digital certificates that prove specific facts about an individual or entity. These credentials are cryptographically signed, making them tamper-proof, and they can be independently verified without relying on a central authority. In the context of online reviews, VCs can establish authenticity.

Here’s how it could work:

When a customer purchases a product or uses a service, the business can issue a VC to confirm their legitimate experience. This can even be attached to their receipt. This credential could serve as proof that the individual has transacted with the business, preventing fake reviews from people with no real experience.

A review platform could require users to attach a VC that verifies they have purchased or used the product before submitting a review. This would eliminate fake reviews from non-customers and ensure that only those with firsthand experience can provide feedback.

Since VCs are cryptographically signed, they cannot be altered or faked. This ensures the integrity of the review content and prevents businesses from modifying or fabricating reviews to boost their reputations.
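To make this concrete, here is a minimal sketch of what such a proof-of-purchase credential could look like, loosely following the W3C Verifiable Credentials Data Model. The issuer, subject identifiers, and credential type are hypothetical, and a real credential would carry a complete cryptographic proof rather than the elided value shown here:

{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "ProofOfPurchaseCredential"],
  "issuer": "did:example:acme-store",
  "issuanceDate": "2024-10-22T10:00:00Z",
  "credentialSubject": {
    "id": "did:example:customer-123",
    "orderId": "order-98765",
    "product": "Wireless Headphones"
  },
  "proof": {
    "type": "Ed25519Signature2020",
    "verificationMethod": "did:example:acme-store#key-1",
    "proofValue": "..."
  }
}

A review platform would check the proof against the store's published key and only then accept the review.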

Benefits for Businesses and Consumers

Consumers would no longer have to second guess the authenticity of reviews. They can trust that every review is tied to a verified, real customer.

By adopting VCs, businesses can ensure they remain compliant with the new FTC rule. They would have a verifiable record that shows they only accept reviews from legitimate customers, protecting themselves from potential penalties.

Early adopters of VCs in their review systems could set themselves apart from competitors. Businesses that champion transparency and fairness by using VCs can build stronger relationships with their customers, enhancing brand loyalty.

Using VCs can automate the verification process, reducing the need for manual review moderation. As AI continues to be used in generating content, including reviews, this automation is key to keeping platforms efficient while maintaining high standards of trust.

Trust But Verify

The FTC’s new rule is a step in the right direction, but to truly tackle the problem of fake reviews, the marketplace needs more than just enforcement. It needs technology that ensures transparency and trust. Verifiable Credentials can provide that assurance, giving businesses and consumers the tools they need to foster a fair, competitive, and honest marketplace. As online commerce continues to grow, adopting VCs could be the key to making reviews a trustworthy resource once again.

If you'd like to get started with Verifiable Credentials, check out our free, open source SDKs!


Indicio

Introducing Indicio Proven Auth — easier, faster, more secure identity access management with Verifiable Credentials

Indicio Proven Auth allows you to quickly configure single sign-on (SSO) so that your customers or end users can log in with portable digital identities instead of usernames and passwords.

By Trevor Butterworth

With Gartner Research predicting a massive shift towards decentralized digital identity and verifiable claims, Indicio has launched a simple, powerful solution for any business or organization to benefit from using Verifiable Credentials — Indicio Proven® Auth.

Proven Auth allows you to quickly configure single sign-on (SSO) so that your customers or end users can use a Verifiable Credential to log in to applications and websites instead of usernames and passwords. This means:

Replacing weak passwords and weak second-factor authentication for better security.
No tracking by centralized third-party identity providers.
No worries if a federated identity provider goes dark.
Fewer steps for authentication in a zero-trust architecture model.
A simpler, more secure user experience.
Taking advantage of the portable digital identity transformation in the European Union (eIDAS, EUDI), the travel sector, and mobile driver’s licenses.

Unlock the feature-rich technology driving digital transformation

The improved workflow, privacy, and security are enough to justify making the switch — but Verifiable Credentials bring a lot more feature-rich power to SSO and identity access management.

Get all these features faster and cheaper than conventional identity access management solutions.
Comes with Keycloak for identity access management, but is easily configurable to use other software.
Combine popular protocols (e.g., OIDC, SAML) with widely used policy engines (such as Amazon Verified Permissions or Abacus) for role- or user-based authorization decisions based on the attributes of a Verifiable Credential.
Unlike conventional identity provision, Proven Auth enables systems to allow access based on credentials they have never seen before, provided they trust the source (e.g., government-issued ID).
Credentials can be quickly configured to handle complex information flows, making it easier to implement least-privilege access for zero trust.
Verifiable Credentials go beyond the limits of passkeys, do not need to be enrolled, and can hold contextually useful information that can be shared by consent (simplifying compliance).

How Indicio Proven Auth delivers next-gen SSO, privacy, security, and user experience

Conventional SSO requires you to use a third-party identity provider to authenticate access to multiple applications and services. While this saves you from entering a password and username for each session with each service, it still means relying on a subscription to a third-party identity provider and cumbersome password rotation, which adds expense and unnecessary complexity to the user experience.

For example: if an employer issues a Verifiable Credential to an employee, the employer can be certain it’s their employee accessing an application or system rather than simply trusting an outside identity provider. The employee doesn’t need to use or rotate passwords, their access to the company’s systems cannot be stolen or phished, and third-party identity providers aren’t able to track employee login behavior.

Seamless SaaS access

Verifiable Credentials use advanced cryptography for instant, seamless authentication. You can be certain of the source of the credential, you can be certain that it is bound to the person or organization it has been issued to, and you can be certain that the data inside has not been altered.

SaaS applications can be quickly configured to accept a Verifiable Credential instead of a third-party identity provider. All you need to do is issue a Verifiable Credential or decide which Verifiable Credential issuers are valid for accessing your system. When logging into an application, Proven Auth checks to see if the credential issuer is valid and provides the destination system with the necessary data about who you are and what you should have access to. Proven Auth doesn’t need to have seen your credential before to do this.
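Conceptually, that issuer check reduces to a small trust-registry lookup plus attribute mapping. Here is a hypothetical sketch of the idea in Node.js; this is not Indicio's actual code, and it assumes the credential's signature has already been cryptographically verified:

// Hypothetical sketch of issuer-based access; not Indicio's actual code.
// Assumes the presentation's signature has already been verified.
const trustedIssuers = new Set([
  'did:example:state-dmv',
  'did:example:employer-hq',
]);

function authorize(presentation) {
  // The system may never have seen this credential before; what matters
  // is whether it trusts whoever issued it.
  if (!trustedIssuers.has(presentation.issuer)) {
    return { allow: false, reason: 'untrusted issuer' };
  }
  // Map attested attributes to roles for the destination system.
  const roles = presentation.claims.department === 'engineering'
    ? ['repo-read', 'repo-write']
    : ['repo-read'];
  return { allow: true, roles };
}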

Combine SSO with secure biometric authentication
For critical security access, Verifiable Credentials are a powerful way to implement biometric access, as a liveness check can be accompanied by the presentation of a biometric template bound to a credential and both compared for instantaneous authentication.

Do more for less

Compared with current approaches to managing identity, privacy, and security, Gartner’s Market Report notes that decentralized identity and Verifiable Credentials  represent “magnitudes of improvement in terms of efficiency, cost and assurance.”

To see how Indicio Proven Auth can transform your identity access management and prepare you to take advantage of a decentralized world, why not book a demo and learn how  Indicio is deploying Verifiable Credential solutions across different sectors for seamless trust.

To learn more about Indicio Proven Auth and verifiable credentials, contact us or visit us at indicio.tech/proven-auth/

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Introducing Indicio Proven Auth — easier, faster, more secure identity access management with Verifiable Credentials appeared first on Indicio.



Monday, 21. October 2024

Indicio

Special Indicio Network promotion celebrating the open-source community

Realize the benefits of your decentralized identity and Verifiable Credential products with the Indicio Network. The world’s only enterprise-grade network for delivering fast and powerful decentralized identity solutions built on Hyperledger Indy.

Gartner predicts that “by 2026, at least 500 million smartphone users will be regularly making verifiable claims using a digital identity wallet built on distributed ledger technology,” and Indicio is leading the way with our global network and suite of industry solutions. 

To celebrate the Linux Foundation Decentralized Trust Member Summit, we’re offering up to 50% off the Indicio Network transaction endorser writing packages for new customers and node operators — giving you the perfect opportunity to deploy on a decentralized network built and managed to deliver the highest quality performance for enterprise-grade solutions at any scale.

Why Choose the Indicio Network?

The Indicio Network is built on Hyperledger Indy and designed to support a wide range of Verifiable Credential implementations at any scale. It provides a stable home for your solutions no matter your use case, industry, or organization type. As the world shifts to decentralized identity models, the Indicio Network stands out with its advanced support, ease of use, and strong community support. Here’s why many have found success using the Indicio Network:

Privacy-first by design

The Indicio Indy Network makes it possible for governments, enterprises, and organizations to create advanced data-sharing systems built on decentralized identity technologies. Designed and managed to the highest standard of privacy and security, no personal data is written to the public ledger. Ever. This eliminates the need for complicated encryption techniques, drastically reducing the risk of breaches and misuse.

Proven security and scalability

Indicio’s network is operated by nearly two dozen companies and organizations from around the world, ensuring a robust, resilient platform for the public DIDs that support credential issuance. Credentials issued and verified using the Indicio Network operate at lightning speed, no matter the scale.

Seamless interoperability  

The Indicio Network is built for flexibility. Its decentralized architecture built on Hyperledger Indy ensures that you can seamlessly connect and interact across different verifiable credential ecosystems, ensuring interoperability across global markets. Whether you’re in finance, healthcare, education or travel, the Indicio Network allows you to layer verifiable credentials into your existing systems.

Open-source innovation  

Backed by the vibrant Hyperledger community and other open source and open standards projects and bodies, the Indicio Network is continually expanding. Open-source contributions ensure that the network is at the cutting edge of innovation, with developers and businesses working together to build and refine decentralized identity solutions. 

Compliance-ready  

With data privacy regulations like GDPR and eIDAS, data compliance is critical. The Indicio Network ensures you meet these regulations by enabling decentralized verification methods that don’t require collecting or storing personal information — making it easier to meet compliance standards without sacrificing user experience.

There’s no better time to join the Indicio Network

At Indicio, we believe in putting open-source technology at the center of the growing decentralized identity market. That’s why we’re offering up to 50% off in honor of the Linux Foundation Decentralized Trust Member Summit — giving our friends across the open-source community a cost-effective way to explore the benefits of our professional network.

Whether you’re looking to create secure authentication systems, streamline reusable Know-Your-Customer (KYC) processes, or build privacy-respecting identity ecosystems, the Indicio Network provides the infrastructure and support you need to scale and succeed.

Don’t miss out—activate your discount today and take advantage of this exclusive offer*. Let’s build the future of decentralized identity together!

* Offer expires on November 30, 2024; new customers and node operators only.

To learn more about Indicio and verifiable credentials, contact us or visit us at Indicio.tech

 

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Special Indicio Network promotion celebrating the open-source community appeared first on Indicio.


IdRamp

Healthcare Account Recovery: Identity Verification with MS Entra ID


Healthcare organizations are facing a cyberattack epidemic, with account takeover attack (ATO) incidents surging at an alarming rate.

The post Healthcare Account Recovery: Identity Verification with MS Entra ID first appeared on Identity Verification Orchestration.

Tokeny Solutions

Institutional Tokenization 3.0: Break Silos

October 2024

Since Tokeny started building tokenization solutions in 2017, we have seen financial institutions exploring tokenization of assets in many different ways. The evolution has unfolded in three main phases, each addressing the limitations of the previous one.

Tokenization 1.0 – Permissioned networks

Initially, institutions turned to permissioned blockchains for tokenizing assets, prioritizing full control of the network as a way to control the tokens on it. However, this approach quickly exposed limitations in terms of scalability and interoperability, making it difficult for tokens to interact with external applications.

Tokenization 2.0 – ERC-20 Permissionless Tokens + Wallet Whitelists

To solve these issues, institutions started to turn to ERC-20 tokens with wallet whitelists on public blockchains. While this allowed some control over the distribution of tokens, it introduced new compliance and scalability problems: wallets are not linked to identities on the blockchain, so onchain ownership records become unreliable.

It made cross-platform distribution and onchain settlement complex from a compliance perspective because compliance and transfer rules were enforced off-chain, keeping tokens confined within a single platform.

Tokenization 3.0 – ERC-3643, Identity-Based Permissioned Tokens

The open sourcing of ERC-3643 in 2022 introduced identity-based permissioned tokens, ensuring that ownership and compliance remain reliable in every situation.

Interoperability, Not Competition: A common misconception is that using ERC-3643 means competing with other token standards. However, ERC-3643 is built for interoperability within the blockchain ecosystem. By being fully compatible with ERC-20 tokens, assets tokenized using ERC-3643 can integrate seamlessly with existing wallets, DeFi platforms, and analytics tools.

Modular and Composable: ERC-3643 allows projects to start with a flexible compliance framework and expand its functionality through composability. This modular approach enables projects to combine additional smart contracts (e.g., automated capital calls, dividend allocation, …) to meet specific needs.

Eliminating Single Points of Failure: ERC-3643 links token ownership to identity addresses instead of wallets. Authorized parties validate an investor’s eligibility and issue proofs onchain to their identity addresses, ensuring that compliance and ownership records are always tied to a verified identity. Wallets and platforms won’t be the single points of failure as the ownership remains secure and verifiable through the onchain identity.

Issuer Control, Not Platform Lock-In: As transfer rules are enforced onchain, issuers and their appointed agents maintain control of their tokens. They always have real-time insights on who owns what without relying on distributors’ sub-ledgers. With this approach, issuers are no longer restricted to tokenization platform silos. They represent their assets onchain, appoint agents, and activate distribution channels.
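To make the onchain compliance idea concrete, here is a rough sketch of how an application might pre-check a transfer against an ERC-3643 token using ethers.js. The addresses are placeholders, and the exact function surface should be confirmed against the published ERC-3643 interfaces:

// Sketch: pre-flight compliance check against an ERC-3643 token (ethers v6).
// Addresses are placeholders; confirm the ABI against the ERC-3643 spec.
const { ethers } = require('ethers');

const abi = [
  'function canTransfer(address from, address to, uint256 amount) view returns (bool)',
];

async function checkTransfer(provider, tokenAddress, from, to, amount) {
  const token = new ethers.Contract(tokenAddress, abi, provider);
  // The token consults its onchain identity registry and compliance
  // modules, so the answer reflects the issuer's transfer rules.
  return token.canTransfer(from, to, amount);
}

Because eligibility is resolved onchain, any platform can run the same check without maintaining its own sub-ledger.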

ERC-3643 paves the way for breaking tokenization silos, enabling cross-smart contracts and cross-platform interoperability. Alongside 78 industry leaders, we proudly support the non-profit ERC3643 Association in improving this open market standard. Together, we can drive lasting impact through collaboration.

So, what do you think Tokenization 4.0 will look like?

Tokeny Spotlight

PARTNERSHIP

Tokeny is integrating Chainlink Labs infrastructure within our solutions.

Read More

INTERVIEW

CCO Daniel Coheur was interviewed at the iconic NYSE to discuss buy-side trends.

Read More

EVENT

We attended DAW with one clear message: the strong need for an interoperable ecosystem built on shared standards.

Read More

PARTNERSHIP

Tokeny partners with AMA-AMBIOGEO to tokenize $4.6 Billion Gold Reserves.

Read More

PRODUCT NEWSLETTER

We discuss how our platform empowers fund servicers to act in onchain finance.

Read More

NEW TEAM MEMBER

Meet Jordi Reig, our new Head of Engineering. Welcome to the team!

Read More

Tokeny Events

RWA Summit New York
October 22nd – 23rd, 2024 | 🇺🇸 USA

Register Now

Smartcon
October 30th – 31st, 2024 | 🇭🇰 Hong Kong

Register Now

Fintech Festival Singapore
November 6th – 8th, 2024 | 🇸🇬 Singapore

Register Now

The Digital Money Event
October 23rd, 2024 | 🇬🇧 United Kingdom

Register Now

Digital Assets Week Singapore
November 4th – 5th, 2024 | 🇸🇬 Singapore

Register Now

ERC3643 Association Recap

Press Release

ERC3643 Association Leads RWA Tokenization Standardization with 78 Industry Leaders.

Learn more here

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.


The post Institutional Tokenization 3.0: Break Silos appeared first on Tokeny.


uqudo

Web 3.6.1 and Mobile SDK 3.2.0 updates

The post Web 3.6.1 and Mobile SDK 3.2.0 updates appeared first on uqudo.



Spherical Cow Consulting

The Importance of Digital Identity Wallet Standards


Digital identity wallets are too important to be treated as just an app, with only your favorite app store’s guidelines. I also think it’s too important to solely rely on a mess of government guidelines written with varying degrees of clarity.

We already have so many questions, like:

How much should a wallet know about the credentials it holds?
How should it make queries to access only the required information?
How can users find out which credentials are stored in each wallet, and how should wallets communicate what they contain?
Should wallets be able to interoperate to help users find and share entire credentials or just specific details?

Answers in Regulation?

I wrote a post about the EU’s Digital Identity Architecture Reference Framework (ARF) a few months ago. The ARF is probably the best source of guidance right now, given its comprehensive approach and the level of collaboration involved. It outlines the expected behaviors of a digital identity wallet, with its development and use supported by the eIDAS 2.0 regulation.

“The ARF is an outline that provides the first blush of a framework for how digital wallets will work in the EU. The European Commission kicked off the work through a Commission Recommendation from June 2021 that urged Member States to develop common standards, technical specifications, and best practices in response to the eIDAS 2.0 regulation. EU Member States sent their experts to join a collaborative process to build the framework.” – The EU Digital Identity Architecture Reference Framework – How to Get There From Here

All that said, ARFs are not specifications. They describe a design, but the details of the implementation, such as what, when, and how different protocols must be used, are left open to interpretation. ARFs lay the groundwork for building specific technical standards, guiding more detailed development. Incredibly helpful, but not enough by itself to help ensure clarity and interoperability.

Answers in Open-Source Libraries

OK, so it’s good that there is a reference framework under development. For that matter, there is also at least one effort to build and share code libraries that will support the development of digital identity wallets: the Open Wallet Foundation.

From their website, “The OWF aims to set best practices for digital wallet technology through collaboration on standards-based OSS components that issuers, wallet providers and relying parties can use to bootstrap implementations that preserve user choice, security and privacy.”

They’ve also recently published a “Wallet Safety Guide” that provides guidelines to developers on safe ways to implement a wallet. The guide offers four areas of focus, or ‘pillars’: Privacy, Security, Supporting Functions, and Governance. It is an incredibly helpful guide and, when combined with the code libraries their members are developing and making available, takes developers one step closer to creating a wallet that is fit-for-purpose and safe for users.

But, similar to an ARF, these aren’t specifications. There are still low-level details that need clarity. Do we have the right protocols that allow all the components of a digital identity wallet to share, in a controlled manner, data about itself and the credentials it contains?

Answers in Specifications

But surely there must be something in the standards world that applies to wallets! Well, sort of. There are quite a few specifications about credentials and their properties (for example, OpenID for Verifiable Presentations, which focuses on presenting identity data securely; the W3C’s Verifiable Credentials Data Model, which defines a standard for digital credentials; and of course the work in the IETF SPICE working group). Some of these are approved standards, some are works in progress.

One of the works in progress is the Digital Credentials API within the W3C. This API aims to create a standardized way for web browsers, wallets, and verifiers to interact, ensuring data privacy during credential exchanges. Tim Cappalli, perhaps better known for his championing of passkeys, created a great diagram that shows what specific step in the process the DC API work is focused on. It also shows where other specifications need to exist for the other steps.
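For a feel of the direction this work is taking, here is an illustrative sketch of how a website might request a credential through the browser, loosely based on early drafts of the Digital Credentials API. The exact method name and request shape are still in flux, and the request contents below are hypothetical:

// Illustrative only: the Digital Credentials API is a work in progress,
// and the final method name and request shape may differ.
const credential = await navigator.identity.get({
  digital: {
    providers: [{
      protocol: 'openid4vp',
      // Hypothetical OpenID4VP request asking only for an age-over-18 claim
      request: JSON.stringify({
        presentation_definition: {
          input_descriptors: [{ id: 'age_over_18' }],
        },
      }),
    }],
  },
});
// The browser brokers the exchange with the user's wallet; the site
// never learns which other credentials the wallet holds.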

Do Standards Really Matter?

So, there are frameworks. There are open-source libraries. There are credential specs and work to standardize APIs. Why isn’t that enough? Digital identity wallets are part of such a new field, surely building experience this way is a good thing!

That’s a perspective that a lot of people have, and in most situations I would agree with them. Building standards without understanding real-world use cases is an annoying academic exercise that can waste a lot of time. In this case, however, we’re talking about testing out these new ideas in a way that involves personal data. A LOT of personal data. We’re not going to get it right the first time around. So what does that mean for all that personal data? It means a high probability of exposing that personal data to entities that shouldn’t have it.

Wrap Up

At the end of the closing keynote panel at Authenticate 2024, Andi Hindle asked “Are wallets going to be successful? Are they the right path forward?” My answer was “The question is irrelevant.” Yes, that’s a cheeky way of putting it, but digital identity wallets are already here. They are already being implemented. They are not going to go away. And they are introducing new threat vectors that we are hoping regulations will protect us from.

That’s a great model … if everyone and everything wants to abide by the law and agrees to interpret it in the same way. But for a world where that ideal is perhaps not the most realistic, having technical specifications that allow or prevent behavior in a very predictable fashion sure would be nice.

Regulation, like the EU’s eIDAS 2.0, and open-source efforts such as the Open Wallet Foundation are important steps in guiding digital identity wallets. However, we need to complement these efforts with detailed technical standards that ensure wallets operate predictably and securely. This layered approach—combining regulation, open-source libraries, and technical standards—can create a safer ecosystem for users.

So let’s get moving.

Reach out if you want to learn more about navigating this process or need support with standards development. With my experience across various SDOs, I’m here to help guide you through the complexities of Internet standards development.

The post The Importance of Digital Identity Wallet Standards appeared first on Spherical Cow Consulting.


Ockto

Expectations for CCD2: clarity, a level playing field, and proportionality


More clarity, workable proportionality requirements, and a level playing field for all parties: that is what is expected and hoped for from the upcoming Consumer Credit Directive 2 (CCD2). This European directive broadens the scope of parties that fall under supervision and brings changes for credit providers, who must prepare for stricter rules and new requirements.


KuppingerCole

Analyst's View: Endpoint Protection, Detection and Response (EPDR)


by John Tolbert

In the rapidly evolving landscape of cybersecurity threats, Endpoint Protection Detection and Response (EPDR) solutions are, without a doubt, indispensable components of an organization's security architecture. EPDR solutions bridge a critical gap by integrating proactive endpoint protection with advanced detection and real-time response capabilities. This unified approach enables organizations not only to prevent known malware infections but also to swiftly identify, analyze, and mitigate complex threats that can evade conventional defenses. As endpoints—ranging from desktops and laptops to mobile devices—serve as pivotal entry points for cyber adversaries, incorporating EPDR ensures comprehensive visibility and robust protection across the entire digital workspace, thereby fortifying the organization's overall security posture.

Lockstep

Back to the Future with Verifiable Credentials


I recently co-authored a white paper about verifiable credentials, with the founder and CEO of the exciting Australian start-up, Verified Orchestration.

“Back to the Future — Revolutionising Digital ID with new technology and centuries old governance” (PDF) looks at how verifiable credentials enable whole ecosystems to digitise their established governance structures, contexts and rules, as well as their transactions.

This blog is a lightly edited extract from our full paper.

Photo credit: Sean Bernard, Flickr (Creative Commons Licence). 

Verifiable Credentials are new again

Verifiable Credentials have been around for a long time. You have more than likely used Verifiable Credentials without knowing it. They are commonplace, embedded in mobile phones, payment cards, e-passports, and smart watches.

The mobile phone SIM is an early example and provides a perfect explainer. The Subscriber Identification Module is both a special purpose integrated circuit and an administrative record. The SIM holds an official copy of your account information and your unique international subscriber number, all of which is digitally signed by your phone company.

The SIM also holds a unique cryptographic key which is used by the handset to digitally sign (in simple terms, “mark”) the start and stop of every call you make. This signature is verifiable by network operators globally and allows them to know which subscriber is making which call from what location, anywhere in the world.

The global cell phone network could not function without Verifiable Credentials.

The same goes for global credit card payments. The EMV chip card system long ago replaced magnetic stripe cards, which were vulnerable to skimming and counterfeiting. Instead of a magnetic stripe storing cardholder data and passively transferring it to a terminal, the chip card carries a Verifiable Credential holding the cardholder data, a cardholder key, and the signature (i.e. endorsement) of the bank which issued the card. Every payment made with the chip card is signed (marked) by the cardholder key, rendering it tamper resistant and globally reconcilable.

Verifiable Credentials are a technology that puts instrumental pieces of information about individuals into the hands of those individuals and empowers them to present that information directly, purposefully and securely.

Verifiable Credentials are decentralised in that the information they carry is valid on its face and can be presented directly, peer to peer, without intermediation.

Verifiable Credentials and the identity problem

While Verifiable Credentials have been used for decades, they have been reenergised lately to help solve digital identity. SIMs and EMV cards are highly specialised, dedicated to singular applications, with proprietary standards overseen by industry associations, and bound to physical chips. Today, Verifiable Credentials are being standardised by several global working groups, with a view to extended use cases and applications.

Why the shift to Verifiable Credentials? The way we handle most identity information online has historically followed a distinctly centralised pattern. Instead of putting identity information in the hands of the holder, we tend to keep ostensibly official copies on different servers, where they sit waiting to be exercised on the holder’s behalf.

To put their digital identity to use, the holder has to activate it on the server somehow (usually by quoting a plaintext username and password), triggering a cascade of actions in their name. Internet banking, online shopping, remote workflows, e-health, e-government, travel booking, ticketing and so on all follow the same pattern.

Centralised identity management is odd compared with regular credentials. Imagine if we handled driver’s licenses in the same way as current online identity: the motor vehicle registry would ask you to give your license back to them, and in its place issue you a username and password to access it and release it whenever you happen to need it.

The online world has followed this unreal pattern ever since the “Identity Metasystem” was published in 2006, promoting the canonical arrangement where a Subject and a Relying Party deal with each other via a third-party Identity Provider.

The three-party model is entirely reasonable with respect to the way authoritative information about parties is sourced, however the Identity Metasystem also dictated that most interactions would draw down identity information in real time. That’s the odd part of digital identity.

The new wave of interest in Verifiable Credentials crystalised in July 2018 when the World Wide Web consortium (W3C) released the Verifiable Credentials Data Model 1.0 with the byline Expressing verifiable information on the Web.

[For some reason, subsequent iterations of the W3C VC Data Model dropped the mention of “verifiable information”. I thought that was the best thing in the specification.]

Back to the future

Verifiable Credentials are a revolutionary digital technology, placing cryptographic keys under the sole control of the credential holder, making credentials highly resistant to theft, counterfeiting or takeover. The new wave of standards now allows customised Verifiable Credentials to be securely carried in mobile digital wallets and used in a range of business applications to reliably prove endorsed facts and figures in their specific contexts.
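As a simplified illustration of that holder binding, a wallet keeps a private key on the device and signs each presentation with it, so a verifier can confirm the presenter controls the key named in the credential. A minimal Node.js sketch under those assumptions; real wallets keep keys in secure hardware rather than in memory:

// Minimal sketch of holder binding with a device-held key (Node.js).
// All names are illustrative; real wallets use secure hardware.
const crypto = require('crypto');

// At enrolment: the wallet generates a keypair, and the issuer binds
// the public key into the credential it signs.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

// At presentation: the wallet signs the verifier's fresh challenge.
const challenge = crypto.randomBytes(32);
const signature = crypto.sign(null, challenge, privateKey);

// The verifier checks the signature against the key in the credential,
// proving the presenter holds the key without any central lookup.
console.log(crypto.verify(null, challenge, publicKey, signature)); // true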

By decentralising the presentation of credentials, and conserving the established local rules that govern how they are issued and consumed, cryptographically Verifiable Credentials are far less disturbing to business processes than general purpose digital identities and the centralised presentation flows entailed by the Identity Metasystem.

The post Back to the Future with Verifiable Credentials appeared first on Lockstep.


Datarella

Orchestration Systems in Track & Trust


This is the fourth article in a series of technical posts about how Track & Trust works at a component level. To begin with, we’ll outline how our orchestration systems, real-time monitoring, and dashboards work together. Additionally, we’ll explore the challenges we faced and how we overcame them. Quick navigation links to follow-up articles will be provided at the bottom of each article once the series is complete.

Orchestration Systems and CI/CD

To manage a large fleet of custom-built mesh node devices, we needed to develop advanced orchestration systems. Specifically, these systems enable us to provision and manage devices efficiently. Furthermore, we created a dedicated approach to real-time monitoring of node health in the field. As a result, Track & Trust includes a full suite of dashboards that we can now use to monitor key performance indicators and display the outputs of our Probabilistic 360° Supply Chain Tracking product. In addition, the orchestration systems we built are now fully operational and enable a highly flexible approach to updating and managing the software deployed to our hardware in the field. Let’s jump into how we accomplished this.

The Addressing Challenge

Most people aren’t aware of this, but devices on 4G connections don’t have static IP addresses. The IP addresses are assigned by the constantly shifting cellular towers the mobile device connects to. This is a real problem if you want to set up a software pipeline to trigger updates on mobile or IoT devices. To solve this, we set up a virtual private network (VPN) based on the open-source WireGuard protocol. Essentially, it’s a software-defined network with Tailscale under the hood. This approach means using a peer-to-peer mesh network to handle addressing devices inside our mesh network (pretty meta, huh?). By routing our network traffic through a VPN, we achieved much better security. On top of this, we got static virtual addresses, which allowed us to name and manage the machines at the network level.

Push or Pull Orchestration Systems?

With the addressing problem solved, another challenge popped up. If the machines are only online intermittently, a push approach to updates becomes impossible: a pushed update will never reach the machines that happen to be offline. The solution was a scheduled automation that pulls updates from an Ansible automation engine, which in turn is controlled by a continuous integration and deployment system based around Semaphore. This enabled us to write code in an integrated development environment, push it to GitLab, and then trigger a build that the machines pick up. These builds then deploy automatically on a daily basis whenever the machines come online.

While we were still heavily in development, having this pipeline in place vastly increased our efficiency. We were able to write code and deploy to our custom-made IoT hardware basically as though it were sitting in a cloud environment. On top of this, we were able to designate groups of machines as dev machines and others as stage or prod machines. This combination allowed us to develop and test both hardware and software independently of production and staging environments. It empowered us to rapidly iterate on the status quo without breaking hardware already in use in the field. Additionally, the moment we were ready to update mesh nodes in the field, we could earmark them to update themselves with well-tested code the next time they came online.

Real-Time Monitoring

We needed advanced monitoring to easily update our software fleet. To achieve this, we set up an end-to-end observability pipeline using Fluentbit. This pipeline routed data in real-time from our mesh nodes into a database. Subsequently, we displayed real-time data in Grafana for management purposes. This approach enabled us to debug faster without having to SSH into a specific node to get its logs.

Finally, our Grafana dashboards showed us if all services were up and running, as well as key indicators of device health such as memory usage, temperature, and battery life. We could display logs in the timeframes we were interested in for the machine groups we wanted to monitor. In conclusion, this monitoring technology gave us valuable insights into ensuring our deployed hardware was working correctly and allowed us to fix issues quickly.

The Track & Trust dashboard with realtime information about each machine to optimize field operations


The post Orchestration Systems in Track & Trust appeared first on DATARELLA.


KuppingerCole

Tackling AI-Driven Cyber Risks: A Look at New Security Regulations


by Prof. Dr. Dennis-Kenji Kipker

As artificial intelligence continues to evolve, so do the cybersecurity challenges it brings. AI-enabled cyber threats are opening up new attack vectors, posing significant risks to organizations across industries. At cyberevolution 2024, Dennis-Kenji Kipker, Research Director at cyberintelligence.institute, will address these pressing concerns in his keynote.

Dennis will highlight the growing complexity of AI-related risks and explain why relying solely on regulations like the recently enacted EU AI Act won't be enough to protect against emerging threats. Instead, he advocates for a more proactive approach, using advanced AI-driven tools for continuous monitoring and threat detection. His session will explore how organizations can build resilient defenses by integrating AI technology into their cybersecurity strategies, while ensuring that new regulatory frameworks such as the NIS2 Directive and Cyber Resilience Act work in harmony with these technological advancements.

For professionals navigating the intersection of AI, cybersecurity, and regulation, Dennis' insights will be invaluable in understanding how to mitigate risks and stay ahead of potential threats. In the meantime, watch our interview with him to get insights into how these emerging regulations are designed to keep up with evolving cyber threats and ensure a safer digital future.

Sunday, 20. October 2024

KuppingerCole

Identity Management in a World of Automated Systems: Machine Identities


In this conversation, Matthias and Martin explore the concept of machine identities and their significance in modern IT infrastructures. They discuss the challenges of managing these identities, the importance of lifecycle management, and the impact of regulations on cybersecurity. The conversation emphasizes the need for organizations to understand and properly manage machine identities to ensure security and compliance in an increasingly complex digital landscape.



Friday, 18. October 2024

1Kosmos BlockID

MGM, Caesars Hacks: More of the Same Is Coming Your Way–But Here’s How to Stop It


Given the stunning success of the recent hacks at MGM and Caesars, it’s a safe bet what happened in Vegas won’t stay there for long. Even though technology to prevent such breaches is readily available, there’s every reason to believe large organizations in any number of sectors could soon face a rude awakening.

Success breeds success, after all. It also inspires copycats. The attacks on Caesars Entertainment and MGM Resorts International in early September appear to have been perpetrated by a group of teenagers and young adults that employs simple social engineering techniques to infiltrate corporate systems for fun and serious profit.

Dubbed “Scattered Spider” by some security analysts and UNC3944 or “Muddled Libra” by others, the group of Gen-Z threat actors is believed to have pulled off a series of cryptocurrency heists before breaching and then extorting Western Digital and other technology firms over the past few years. Reuters reports the group has been implicated in 52 attacks spanning multiple industries worldwide since 2022.

Specifics in the casino breaches are still emerging. However, it appears that operatives in the MGM attack used LinkedIn profile information to impersonate a resort employee in “vishing” calls to an outsourced IT support vendor, requesting access to the employee’s corporate accounts after getting “accidentally” locked out. After gaining entry, the hackers escalated to super administrator rights in MGM’s Okta environment. They even configured a second identity provider to bypass multi-factor authentication (MFA) and impersonate highly privileged users within the corporate systems.

In a word: diabolical, especially for a group of suspected 17- to 22-year-olds. But as the Washington Post reports, Scattered Spider’s Vegas jackpot also represents a troubling new escalation in the group’s MO. The hackers threw the company into utter chaos by deploying crippling ransomware from notorious Russian cyber gang ALPHV into MGM’s systems. Ten days into the breach, MGM was still struggling to repair corporate email, restaurant reservation systems, hotel booking operations, slot machines, and digital keycard access at its Aria, Bellagio, and MGM Grand properties. There’s little reason to believe Scattered Spider isn’t already scouting new prey.

Gaming the System: Harvesting Passwords, Short-Circuiting MFA

Ransomware attacks are nothing new, of course. Last year, more than 620 million ransomware attacks worldwide cost victims more than $30 billion. According to Verizon’s 2023 Data Breach Investigations Report, 74% of breaches stem from credentials stolen through phishing, vishing, and SIM-swapping attacks.

Indeed, stolen passwords are implicated in up to $25 million in average losses suffered by a third of all businesses that have fallen victim to cyberattacks over the last 36 months. When an account takeover (ATO) leads to a data breach, it can mean an average additional cost of $9.5 million per incident for US-based companies. ATOs account for more than $300 million in losses annually. And as WAPO points out, Scattered Spider and its Eastern European business partners could worsen matters in coming weeks.

For one thing, you have financially motivated, English-speaking hackers with a proven talent for pulling off social engineering and data exfiltration schemes. Now add the Russian “ransomware-as-a-service” operatives believed to be behind the Colonial Pipeline attack and an underworld network as technologically sophisticated as any modern enterprise.

Mix in plentiful targets with outsourced IT support and call center operations crewed by untrained, often short-term employees vulnerable to vishing. And sprinkle in emerging, AI-powered phishing and vishing tactics and automated credentials-stuffing technologies. Put it together, and far too many organizations in health care, telecom, government, financial services, and others may be vulnerable to an emboldened Scattered Spider and copycat groups. The good news: Organizations can quickly deploy effective defenses. But they’d better move fast.

No More Rolling the Dice with Outdated Forms of MFA

According to a recent survey from Google and Ipsos, a successful data breach can erode customer trust by as much as 44%. As the MGM and Caesars breaches so vividly illustrate, legacy forms of multifactor authentication (MFA) won’t cut it anymore. Cybercriminal organizations like Scattered Spider have clearly developed inventive ways to acquire login credentials and circumvent things like one-time passcodes and limited biometric authentication systems designed to confirm the legitimate user is attempting to access their account.

The problem: Traditional forms of MFA are built around login passwords and a device instead of the identity of the person accessing an account. Even with Windows Hello for Business (WHfB) and Okta Verify Authenticator, anybody with administrative access can register things like user biometrics to any device they can access—or set up an alternative identity provider to bypass authentication measures altogether.

For some business applications, that may not be a significant risk. But it still leaves the door open to account compromise that puts IT and security teams in reactive mode against data breaches and ransomware after access to systems has already been granted. Fortunately, a new generation of strong, non-phishable biometric identity solutions is changing all that.

Enter: “Liveness”-based Biometric Authentication

With traditional forms of MFA becoming so unreliable as a means of identity verification, modern forms of biometric authentication are helping to set a new standard for security and convenience. Solutions certified to FIDO2, iBeta biometrics, and NIST 800-63-3 standards, for instance, use “live” biometric markers tied to a registered identity to provide reliable, strong authentication impervious to account takeover.

These modern biometric solutions verify identity against government-issued credentials (driver’s license, state ID, passport, etc.) and enable non-phishable multi-factor authentication when users log in to digital services.

1Kosmos, for instance, uses the private key of a matched public-private pair in the user’s device as a possession factor (i.e., “what you have”), while a live facial scan becomes the “what you are,” or inherence, element. To access a site, app, or system, a live image scan is compared to an image scan captured at the time of enrollment. If they match, the identity of the person authenticating is confirmed, with 99.9% accuracy, to be the authorized user and not a bot, deepfake, or imposter.
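
The possession-factor half of this pattern is essentially public-key challenge-response authentication. Below is a minimal Python sketch of that general technique, using the widely available cryptography package; it is an illustration under assumed names, not 1Kosmos’s actual protocol.

```python
# Minimal sketch of a possession-factor check via public-key challenge-response.
# Illustrative only -- not 1Kosmos's actual protocol; all names are hypothetical.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the device generates a key pair; the server stores only the public key.
device_key = Ed25519PrivateKey.generate()
server_enrolled_pubkey = device_key.public_key()

# Login: the server issues a random challenge, and the device signs it with the
# private key that never leaves the device ("what you have").
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# The server verifies the signature against the enrolled public key.
try:
    server_enrolled_pubkey.verify(signature, challenge)
    print("Possession factor verified: device holds the enrolled private key")
except InvalidSignature:
    print("Verification failed")
```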

This technology is widely available and supports a consistent onboarding and authentication experience into all apps, devices, systems, and environments—including existing privileged access management systems. Any organization can stop phishing, ransomware attacks, and data breaches before hackers can infiltrate accounts. Scattered Spider simply provided an urgent new reason to stop gambling with security now.

To learn more about 1Kosmos, the only NIST, FIDO2, and iBeta biometrics-certified platform on the market, click here.

The post MGM, Caesars Hacks: More of the Same Is Coming Your Way–But Here’s How to Stop It appeared first on 1Kosmos.


Spruce Systems

What’s the Difference Between a Physical ID Card and a Verifiable Digital Credential?

Individuals are starting to embrace digital identity by replacing physical wallets with smartphone-stored verifiable digital credentials, offering enhanced privacy, security, and convenience.

Gen Z are giving up their wallets – gladly. 

According to a recent New York Times report, teenagers and twentysomethings think wallets are “uncool.” Instead, they’re increasingly storing every payment method, document, or credential they need on their smartphones: credit cards, plane tickets, insurance cards, transit passes, driver’s licenses, and gym memberships.

Leaving home without a wallet might sound terrifying, but it’s quickly becoming the new normal. Digital driver’s licenses, now available in states like New York and California, are poised to spread nationwide. Businesses and agencies that don’t get up to speed on the new digital identity can risk being left behind: one 19-year-old told the Times that if a store doesn’t accept Apple Pay, she “won’t give them my business.” 

You might assume these verifiable digital credentials are just photos of conventional documents or plastic ID cards. But behind the curtain, there’s a lot more going on, involving advanced cryptography and hardware. While early adopters may care most about the convenience of carrying one less thing, the real point of digital identity is that it’s more private, works better online, and can’t be faked as easily as physical ID. 

So what are these digital cards, really – and how do they work? How are they different from physical ID or credit cards? 

Most importantly, if they’re just files on a smartphone, why are they trustworthy?

The Basics of Digital ID Technology

At the most basic level, a verifiable digital credential is not an image but a string of numbers. It relies on a cryptographic signature that can be protected by a chip in your smartphone called a “secure element.” This digital ‘signature’ is unique to the credential issuer – for example, all mobile driver’s licenses are digitally signed by a state’s Department of Motor Vehicles.

These ‘digital signatures’ aren’t simply copies of an image of a human signature. Instead, they’re unique alphanumeric identifiers that confirm a document’s authentic source. Thanks to strong encryption methods, these signatures can’t be reproduced or impersonated by another entity. 

A verifiable digital credential can be checked in various ways by a verifier, such as a rental agent or traffic cop. In many cases, a verifier will already have a record of an issuer’s public key (that is, a string of numbers uniquely tied to the issuer alone), and will be able to confirm a credential’s authenticity without pinging back to a centralized server. This is significant because it can reduce the digital ‘trail’ left behind when a credential is checked, and that trail is one notable privacy risk of this new all-digital system.
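
To make those mechanics concrete, here is a minimal Python sketch of the sign-and-verify pattern described above, using the cryptography package’s Ed25519 implementation. It shows the general technique only; real mobile driver’s licenses follow standards such as ISO/IEC 18013-5, and the key and field names here are hypothetical.

```python
# Minimal sketch of issuing and verifying a digitally signed credential.
# Illustrative only; field and key names are hypothetical.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g., a DMV) holds a private key; verifiers cache its public key.
dmv_private_key = Ed25519PrivateKey.generate()
dmv_public_key = dmv_private_key.public_key()

# Issuance: the credential is just data plus the issuer's signature over it.
credential = json.dumps({"name": "Alex Doe", "license_class": "C"}).encode()
signature = dmv_private_key.sign(credential)

# Verification: needs only the cached public key, so no central server is pinged.
try:
    dmv_public_key.verify(signature, credential)
    print("Credential authentic: signed by the issuer")
except InvalidSignature:
    print("Credential rejected: signature does not match")
```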

A physical credential uses quite different techniques to prove its authenticity. Physical anti-fraud measures including micro-printing, holograms, and see-through panels are the first line of defense against fakes. These physical measures work well enough for low-stakes conventional applications, like proving your age to buy alcohol. 

Physical Credential | Verifiable Digital Credential
--- | ---
Secured by holograms, bar codes, and databases | Secured by unique encrypted signatures
No batteries required | Requires at least some device power
Reveals all printed information when presented | Allows Selective Disclosure
Easily spoofed online | Secure for online use
Requires “phoning home” for full verification | Often verifiable without “phoning home”
Reissued every few years | Reissued regularly
Can be faked using AI | Requires physical infiltration to fake

But that example might highlight the problem: there are a lot of high school kids with fake IDs. Holograms and other physical security elements are a barrier, but they can be faked – whether in pursuit of underage drinking, or more nefarious goals. So in more serious face-to-face interactions, such as when you’re pulled over by a police officer, your ID may be checked remotely by sending your ID number to a central database. This incurs privacy risk since it effectively creates a record of your location or activities.

Things like holograms and microtext are particularly easy to fake when an ID is being used online. In fact, a rising wave of online identity fraud, for everything from opening bank accounts to applying for jobs, is a major motivation for the shift to digital identity. Verifiable digital credentials, unlike physical cards, are tailor-made for online use: because their signatures are cryptographically protected, they can be reliably verified online without the risk of being stolen or copied. 

Choosing your Data

Gen Z may love leaving their wallets at home, but the biggest benefit of digital identity for most users will be having more control over their personal data – far, far more control.

A paper credential has to be handed over all at once to someone checking it, which usually means they’re getting way more information about you than they actually need. That can incur serious privacy risk, for instance if a bartender decides to take an interest in your home address.

Digital identity instead allows what’s known as “selective disclosure.” California’s mobile driver’s license is fairly typical – when the ID is checked, an app will display what information is being requested, and only after the user authorizes the request will the information be transmitted. 

Digital systems can also do even more surprising things with data, such as proving that you’re over 21 without disclosing your specific date of birth. These features are huge steps forward in user privacy and data control. 
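
One common way to implement selective disclosure is for the issuer to sign salted hashes of each attribute rather than the raw values, so the holder can later reveal a single field together with its salt. The Python sketch below is a toy illustration of that idea under hypothetical names; production systems use constructions such as SD-JWT or BBS+ signatures rather than this simplified scheme.

```python
# Toy sketch of selective disclosure via salted attribute hashes.
# Illustrative only; real wallets use schemes such as SD-JWT or BBS+.
import hashlib
import os

attributes = {"name": "Alex Doe", "birth_date": "1999-05-01", "address": "1 Main St"}

# Issuance: each attribute gets a random salt; the issuer signs only the digests
# (signing is omitted here -- see the earlier sign-and-verify sketch).
salts = {k: os.urandom(16) for k in attributes}
digests = {k: hashlib.sha256(salts[k] + v.encode()).hexdigest()
           for k, v in attributes.items()}

# Presentation: the holder reveals one attribute plus its salt; the rest stay hidden.
key, value, salt = "birth_date", attributes["birth_date"], salts["birth_date"]

# Verification: recompute the digest and compare it to the signed value.
assert hashlib.sha256(salt + value.encode()).hexdigest() == digests[key]
print("Disclosed attribute verified without revealing the other fields")
```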

Using Digital ID Offline 

Finally, you might wonder how a digital driver’s license or other credential works when your smartphone (or other device) isn’t connected to the internet. It’s increasingly rare, but there are still plenty of places and moments where you just don’t have a wireless connection.

The good news is that a verifiable digital credential works just as well offline as when you’re connected to the internet. The digital signature that authenticates a credential is, again, stored directly on your device, not on a remote server. By the same token, verifiers will often already have a record of the relevant issuer public keys, making it possible for them to verify your ID without an internet connection.

Different, and Mostly Better

This has been a high-level overview of some of the differences between physical and digital identity cards or other credentials. There is still much, much more going on under the surface, particularly when it comes to grasping how encryption and digital signatures work.

Hopefully it’s clear even at a glance that there are major differences between digital and physical credentials – including differences that will subtly change how we use and think about identification documents. Many of those differences are clear efficiencies, but a handful may make digital less convenient than paper credentials in particular ways. 

The advantages in user privacy and overall system security will hopefully make those tradeoffs worthwhile, but what’s clear is that the change is just over the horizon. If you need help navigating the new landscape, reach out to SpruceID.

Get in Touch

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


Ocean Protocol

F1 Racing 2024 Strategy Analysis Challenge— Final Classification

Introduction

The 2024 Formula 1 Racing Challenge provided data scientists with detailed lap-by-lap data from the current F1 season. The challenge focused on analyzing key trace elements such as tire compounds, pit stop strategies, and lap times. Provided information included telemetry data covering each race, including variables like tire choices, stint lengths, lap times, and pit stop durations. Participants used this information to explore patterns that influenced race outcomes.

Each participant performed Exploratory Data Analysis (EDA) to uncover relationships between variables like tire degradation and race performance. They analyzed how tire compounds, such as soft, medium, and hard, impacted lap times over different stints and how teams adjusted pit stop strategies depending on track conditions. Drivers’ positions and lap times were linked to tire management, showing the importance of optimizing tire usage for each race phase.

The analysis also explored how race length influenced pit stop frequency and tire choice, with drivers using multiple compounds across various stints. The participants applied correlation analysis to measure how decisions made during pit stops impacted final race positions.
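
For a sense of what that kind of correlation analysis can look like in practice, here is a brief Python sketch using pandas. The file name and column names are hypothetical stand-ins; the challenge’s actual dataset schema is not reproduced here.

```python
# Hedged sketch of a pit-stop vs. finishing-position correlation analysis.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd

laps = pd.read_csv("f1_2024_laps.csv")  # per-lap telemetry, one row per lap

# Aggregate to one row per driver per race: pit stop count and final position.
race_summary = (laps.groupby(["race", "driver"])
                    .agg(pit_stops=("pit_stop", "sum"),
                         final_position=("position", "last"))
                    .reset_index())

# How strongly does pit stop frequency track the final finishing position?
corr = race_summary["pit_stops"].corr(race_summary["final_position"])
print(f"Correlation between pit stop count and final position: {corr:.2f}")
```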

Top submissions

1st Place: Yunus and Firuze

Yunus Gümüşsoy and Firuze Simay Sezgin approached the analysis with a comprehensive view of Formula 1 race strategies, which set their report apart. Although all participants had to analyze tire performance, pit stops, and lap times, Yunus and Firuze’s analysis emphasized how different strategy elements influenced race outcomes. They focused on how teams adapted to changing race conditions, showing how decisions related to tire wear management and pit stop timing played a crucial role in shaping performance.

Their report went beyond simply presenting data on tire compounds and lap times by examining how race interruptions, such as safety car deployments, forced teams to adjust their real-time strategies. Yunus and Firuze contextualized these strategic decisions within the flow of the race, explaining how early choices like tire selection and pit stop frequency had long-term impacts. By focusing on the timing and reasoning behind these decisions, they provided more profound insights into the effectiveness of the approaches taken by teams.

The strength of their report lies in how they connected these factors into a cohesive analysis. Instead of viewing tire performance or pit stops as isolated variables, Yunus and Firuze demonstrated how these elements were interdependent and how their combined effects influenced overall race strategy. This integrated approach clarified how teams balanced short-term decisions with long-term race objectives, offering a nuanced view of race strategy that went beyond surface-level observations.

2nd Place: Luca Ordronneau

Luca Ordronneau’s report distinguished itself by focusing heavily on the variability of tire strategies across different races. His analysis centered on how teams adjusted their use of tire compounds based on specific track characteristics, such as tire degradation rates and weather conditions. By comparing tire choices across multiple races, Luca highlighted how specific teams favored soft tires for shorter stints on high-degradation tracks, while others opted for medium or hard tires for endurance on less demanding circuits.

One of Luca’s key strengths was his attention to how tire strategies shifted throughout the race. He broke down the number of laps completed on each tire compound, revealing patterns teams used to optimize performance over different stints. Luca identified how specific teams started with hard compounds for longer first stints, then switched to softer compounds later in the race when lighter fuel loads allowed for more aggressive driving. This approach demonstrated how tire strategies were not only dependent on race conditions but also on each team’s overall race plan.

Luca’s report also emphasized the relationship between pit stop frequency and race outcomes. He explored how teams that made fewer stops tended to rely on harder compounds to minimize the time in the pits, while teams that aimed for faster lap times through more frequent stops focused on softer tires. This analysis provided insights into how strategic decisions about tire and pit stop management could either gain or lose time depending on the specific demands of each race, offering a clear view of how tire selection was tailored to maximize performance across varying conditions.

3rd Place: Maria Nacu

Maria Nacu’s report stood out through its detailed exploration of how tire choices and pit stop frequency influenced race positions. She focused on how teams managed stints, analyzing the number of laps completed on different tire compounds and the impact this had on race performance. Maria’s approach highlighted the importance of balancing tire wear with performance, showing how teams that favored medium and hard tires could maintain consistent lap times across longer stints.

A significant aspect of Maria’s analysis was her focus on how tire compound choices affected race outcomes under varying conditions. She examined how drivers adapted their strategies based on the track layout and weather conditions, identifying that some teams used more aggressive strategies with soft tires during shorter stints. In contrast, others relied on harder compounds for longer, more stable performance. Her insights into tire usage under wet and dry conditions provided a comprehensive view of how teams adjusted their approaches during unpredictable races.

Maria also paid close attention to the relationship between pit stop timing and final race standings. She analyzed how teams that timed their stops efficiently gained significant advantages, particularly when pit stops coincided with race interruptions or safety cars. By demonstrating how teams balanced the need for fresh tires with minimizing time lost in the pits, Maria’s report clearly explained how pit stop strategies influenced race outcomes, tying together tire management and race pacing in a cohesive way.

Interesting Facts

In the 2024 season, teams like Red Bull Racing and Aston Martin heavily favored medium tire compounds. These compounds provided a balance between speed and durability, making them a popular choice across most circuits.

Drivers who opted for aggressive tire strategies, particularly using softer compounds early in the race, often faced significant drops in performance by the final stints due to faster tire degradation.

An analysis of the Monaco Grand Prix revealed a robust correlation between drivers’ starting positions and their final race positions, highlighting the critical importance of qualifying performance on tight circuits.

Teams that used fewer pit stops but timed them efficiently, especially during critical race phases, often finished in higher positions, highlighting the importance of minimizing pit stop frequency while maintaining tire performance.

In races with higher tire wear, teams that strategically switched to medium tires during mid-race stints managed to maintain more consistent lap times. In contrast, teams that delayed tire changes experienced significant performance drops toward the end of the race.

2024 Championship

Our challenges offer prize pools from $10,000 to $20,000, distributed among the top 10 participants. Our points system for the championship allocates between 100 and 200 points to the top 10 finishers in each challenge, with each point valued at $100. Participants accumulate these points toward the 2024 Championship. Last year, the top 10 champions received an additional $10 for each point they had earned.

2024 Championship standings prior to the F1 Racing 2024 Strategy Analysis challenge

Additionally, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists maintain their intellectual property rights while we provide support in monetizing their innovations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

F1 Racing 2024 Strategy Analysis Challenge— Final Classification was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 17. October 2024

KuppingerCole

IAM meets ITDR: A Recipe for Robust Cybersecurity Posture


In today's digital landscape, identity is at the forefront of enterprise security. With a growing number of cyberattacks originating from compromised identities, organizations must adopt an identity-first security approach. This approach emphasizes proactive measures over reactive responses, crucial for minimizing risks and safeguarding sensitive information.

Modern technology offers a solution through Identity Threat Detection and Response (ITDR) tools. By integrating ITDR with Identity and Access Management (IAM) systems, organizations can effectively identify anomalies and remediate risks. This integration helps build a preemptive security posture, reducing the attack surface and aligning with Zero Trust principles.

John Tolbert, Director of Cybersecurity Research at KuppingerCole, will define the technical requirements for ITDR solutions. He will discuss how ITDR can enhance threat detection, support Zero Trust initiatives, and fortify perimeter security against identity-based threats.

Harshvardhan Lale, VP of Business Development, will delve into ARCON's ITDR engine. He will illustrate its role in detecting, remediating, and responding to identity threats. Additionally, he will highlight how ARCON's solutions secure sensitive information and reduce attack vectors, providing a robust cybersecurity posture.




HYPR

Microsoft’s SFI Offers a Blueprint for Identity Security


A few weeks ago, Microsoft issued its first Secure Future Initiative Progress Report. Launched in November 2023, the Secure Future Initiative (SFI) is Microsoft’s acknowledgement that it needs to drastically improve its cloud security posture and make cybersecurity its top priority. The company has dedicated a substantial chunk of its engineering workforce to the effort “to address the increasing scale, speed, and sophistication of cyberattacks.” In line with this mandate, a key area of focus is the protection of identities and secrets.

Identity security continues to be the weakest link in the cyber defenses of the vast majority of organizations. According to recent research, over three-quarters of companies have been hit by identity-related attacks and 69% were breached through authentication processes. One of the main incidents prompting the formation of the SFI was an attack campaign by Storm-0558, in which the threat group used a stolen key that creates multi-factor authentication codes to break into the Microsoft 365 accounts of more than 25 organizations, including government agencies.

It’s encouraging to see that identity security features prominently in the progress report, with several key areas showing improvement. 

Phishing-Resistant Authentication as a Baseline

As an organization, Microsoft is making the move to phishing-resistant authentication, using passkeys or certificate-based authentication. They started with their production environment and are in the process of adoption and enforcement across all users in the productivity environment. This is exactly the right approach — tackle your most critical/at-risk systems first and roll out to the broader workforce in stages.

A phased method also fosters user acceptance. We’ve seen time and time again that passkey adoption gains a momentum of its own when users hear from their peers that it actually makes login faster and simpler. 

New Critical Control: Video-Based Verification

Perhaps the most interesting aspect of their identity security overhaul is how they now handle credential recovery situations. Although the industry widely acknowledges that knowledge-based factors are insufficient for verifying employees' identities, many organizations have yet to take action. Microsoft, however, has shifted to using video calls for user verification, aligning with the NIST 800-63-4 identity proofing guidelines for IAL2. Currently in its second public draft, NIST 800-63-4 provides important updates from 800-63-3 to combat modern identity threats leveraging new technologies and best practices.

By forcing video verification for credential recovery, Microsoft now effectively shuts down one of the fastest growing attack vectors: help desk social engineering. The $100 million attack on MGM Resorts occurred when hackers convinced help desk personnel to reset an employee’s credentials to grant them access.

Here’s an example of what such a video-based credential recovery process could look like.

Improving Secrets Management

While authentication is a critical part of identity security, secrets management is just as important. Secrets — like API keys, encryption keys, and access tokens — are often the target of sophisticated threat actors. Large organizations like Microsoft, which deliver many services, often struggle with secrets management. The problem is only growing as enterprises are building more applications, more quickly, aided by large language models (LLMs). The SFI emphasizes the need to make secrets management and security a top priority.

This includes using hardware-based protection to store secret keys and tokens, automated rotation of secrets, and increasing visibility into the context and usage of secrets. This kind of telemetry is essential to detect misuse and forgeries. According to the progress report, Microsoft has completed implementing hardware-based storage for signing keys for public and US government clouds, and has made significant headway on the remaining fronts.
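
As a rough sketch of what automated rotation can look like, the Python snippet below models a rotation with an overlap window, during which the previous secret is still accepted so dependent services don’t break mid-rotation. It is a toy model under assumed names; real deployments would delegate this to a vault or KMS service.

```python
# Toy model of automated secret rotation with an acceptance overlap window.
# Illustrative only; production systems use a vault/KMS, not in-memory lists.
import secrets
import time
from dataclasses import dataclass, field

OVERLAP_SECONDS = 3600  # old secret stays valid for one hour after rotation

@dataclass
class SecretVersion:
    value: str
    created_at: float = field(default_factory=time.time)

versions: list[SecretVersion] = [SecretVersion(secrets.token_urlsafe(32))]

def rotate() -> None:
    """Issue a new secret version; consumers pick it up on their next fetch."""
    versions.append(SecretVersion(secrets.token_urlsafe(32)))

def accepted_values() -> list[str]:
    """The current secret, plus the previous one during the overlap window."""
    current = versions[-1]
    accepted = [current.value]
    if len(versions) > 1 and time.time() - current.created_at < OVERLAP_SECONDS:
        accepted.append(versions[-2].value)  # wind-down of the prior version
    return accepted
```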

The Role of Automation in Identity Security

Automation is becoming a crucial tool in the fight against identity-related attacks. In the SFI report, Microsoft outlined how they are leveraging automation to detect and respond to identity threats in real-time. From automatically identifying suspicious login behaviors to auto-locking compromised accounts, automation ensures that potential threats are addressed quickly — before they can cause significant damage.

Organizations can adopt similar strategies by integrating automated identity protection tools that monitor user activity and enforce security policies consistently across all users and systems. These tools can help reduce human error and ensure faster response times to identity threats.

Action Steps for Enterprises

Microsoft’s SFI serves as a reminder for all security and engineering teams to uphold rigorous security standards and adhere to the latest industry best practices. In particular, organizations should take heed and prioritize identity protection in their own security roadmaps.

For businesses looking to take a page from Microsoft's playbook, here are a few key actions to consider:

Adopt Phishing-Resistant MFA: Transition away from traditional authentication methods, such as passwords and SMS-based MFA, to phishing-resistant options like passkeys (synced or device-bound) or certificate-based authentication.

Implement Video-Based Verification: Follow Microsoft's lead and consider adopting video-based identity verification for critical credential recovery processes to combat social engineering threats.

Leverage Automation: Use automated tools for identity verification, risk mitigation, and response to suspicious activity. Automation can act as a force multiplier for your security team, catching threats they may otherwise miss.

Enhance Secrets Management: Ensure that secrets like API keys and access tokens are stored securely, regularly rotated, and closely monitored.

HYPR’s Identity Assurance platform combines phishing-resistant MFA, automated identity verification and real-time identity risk mitigation to combat today’s threats as well as those to come. It’s built to fit seamlessly into your current identity stack, whether that’s Microsoft Entra ID or another provider. To learn more, arrange a demo tailored to your environment and use cases.


Indicio

Decentralized identity and what if I lose my phone?

If you’re able to hold all your data on your phone, what happens if you lose it or it gets stolen?

By Trevor Butterworth

One of the key benefits of decentralized identity is that you now get to hold and control your data instead of a third party putting it in a database. The benefits are enormous in terms of privacy and security: no more tracking, and no third-party data breach can expose your personal data.

Scenario one — what happens if you lose your phone? Or someone steals it? Won’t they have access to all your data? And how do you get yours back?

First, you do exactly what you would do if you lost your wallet or passport: You inform your bank and the passport office.

Just as Verifiable Credentials are cryptographically verified, they can be cryptographically revoked. Once revoked, that’s it — they cannot be used by anyone ever again.

This means that you’ll have to go through the relevant identity assurance/KYC process again to get new credentials. This ensures that you — and not someone else — get your new credential. Not having cloud backup increases security.
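
One widely used mechanism behind this kind of revocation is a status list: the issuer publishes a signed bitstring, and each credential carries an index into it. The Python sketch below is a toy illustration of the idea, not Indicio’s specific implementation; see the W3C Bitstring Status List work for a real design.

```python
# Toy sketch of credential revocation via a status-list bitstring.
# Illustrative only; a real list is signed by the issuer and cached by verifiers.
status_list = bytearray(1024)  # 8,192 credential slots, all initially valid

def revoke(index: int) -> None:
    """Issuer flips the bit for a lost or stolen credential."""
    status_list[index // 8] |= 1 << (index % 8)

def is_revoked(index: int) -> bool:
    """Verifier checks the bit in its copy of the signed list."""
    return bool(status_list[index // 8] & (1 << (index % 8)))

revoke(42)              # the holder reports their phone stolen
print(is_revoked(42))   # True  -> verifiers reject this credential
print(is_revoked(7))    # False -> every other credential is unaffected
```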

Scenario two — what could happen to my data between the point of losing my phone and my credentials being revoked?

There are two layers of security: biometrics or a passcode to unlock the phone, and biometrics or a passcode to unlock the digital wallet containing your credentials.

As long as you don’t use “000000” or “123456” for your passcode, the chances of a random six-digit number being guessed correctly are one in a million. Now add in a second different passcode for accessing your digital wallet and factor in phone lockouts after a certain number of incorrect guesses.
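
The arithmetic behind that claim is simple enough to check, as in the short Python snippet below; the lockout threshold shown is a hypothetical device policy, not a specific vendor’s setting.

```python
# Quick check of the "one in a million" figure, plus the effect of a lockout.
PASSCODE_SPACE = 10 ** 6        # six digits -> 1,000,000 combinations
ATTEMPTS_BEFORE_LOCKOUT = 10    # hypothetical device policy

p_single_guess = 1 / PASSCODE_SPACE
p_before_lockout = ATTEMPTS_BEFORE_LOCKOUT / PASSCODE_SPACE

print(f"One guess:          {p_single_guess:.6%}")    # 0.000100%
print(f"Ten guesses (max):  {p_before_lockout:.6%}")  # 0.001000%
# A second, independent wallet passcode multiplies these odds down further.
```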

Similarly, someone who finds a lost phone or steals one at random won’t be able to simulate your biometrics to access the phone simply by having the physical device.

In short, when you combine the security benefits of holding your own data with the seamless authentication provided by cryptography, all wrapped in multiple layers of biometric-passcode security, Verifiable Credentials represent a massive net gain on how we currently manage identity online and off.

To learn more you can watch a recent demonstration of what happens when you lose your digital wallet on Indicio’s YouTube channel, or read about Indicio’s full solution Proven.

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Decentralized identity and what if I lose my phone? appeared first on Indicio.


KuppingerCole

Guidance on Implementing Verifiable Credential Issuance


by Anne Bailey

Organizations interact with many users, including employees, customers, suppliers, and contractors. In order to flexibly and securely handle the variety of each user’s digital journey in an interoperable way, organizations must shift to more user-controlled methods. OpenID for Verifiable Credential Issuance (OID4VCI) is an emerging standard that bridges the gap between a new model of digital identity interaction and known, often already implemented standards. The scope of this paper is to provide guidance to Identity and Access Management (IAM) and security architects on implementing OID4VCI. This whitepaper provides context for the user-controlled model, separates hype from reality, and identifies early learnings for organizations that are ready to issue verifiable credentials.

UNISOT

QR Codes Unlock Real-Time, Verifiable Data with UNISOT’s DPP


Celebrating this fantastic collaboration! It’s incredibly inspiring to witness how ProfilSport and Nordic Textile are leading the way in both sustainability and transparency in the fashion industry. By integrating UNISOT’s innovative Digital Product Passport (DPP) solution, the Telenor Xtra program is not only enhancing the quality of its products but also empowering consumers like never before. This partnership sets a new benchmark for providing real-time, transparent information about every garment, ensuring that consumers have access to critical details such as material origin, production practices and environmental impact.

One of the most exciting aspects of this collaboration is that the QR code on each garment doesn’t just lead to a static, pre-prepared website. Instead, it links directly to the actual, original data stored on the Enterprise Blockchain, offering dynamic, verifiable information that is continually updated throughout the product’s lifecycle. This gives consumers unprecedented access to live data about the products they purchase, building trust and confidence in the brand while ensuring transparency at every stage.

With the focus on 100% recyclable fabric, this initiative isn’t just about creating high-quality sportswear—it’s about embracing a future where sustainability and accountability are at the forefront of every decision. The ability for consumers to scan a QR code and instantly access a garment’s DPP creates a level of engagement and trust that is unmatched. This transparency encourages informed consumer choices and elevates brand loyalty by aligning with the growing demand for ethical and eco-friendly fashion.

The Telenor Xtra program, powered by ProfilSport and Nordic Textile, is truly setting the stage for the future of fashion. By leveraging the UNISOT Asset Traceability Platform, this collaboration is pushing the boundaries of what is possible in terms of sustainable innovation. It’s exciting to see how these brands are not only meeting regulatory requirements but exceeding expectations by offering a solution that integrates technology, traceability and consumer empowerment.

This is the future of fashion — sustainable, innovative and accountable! We are thrilled to be part of this journey, supporting a movement that prioritizes transparency and paves the way for a more responsible and connected fashion industry.

The post QR Codes Unlock Real-Time, Verifiable Data with UNISOT’s DPP appeared first on UNISOT.


KuppingerCole

NIS2 Reality Check: The Deadline Is Here – Are We Ready?


by Matthias Reinwarth

Today marks a critical deadline for all EU member states: October 17, 2024, the date by which the NIS2 Directive must be transposed into national law. For some, this milestone has been met with progress and precision. For others, particularly Germany, the delay in implementation highlights a significant gap between political rhetoric and actionable cybersecurity policy.

Why NIS2 Matters

The NIS2 Directive is designed to strengthen cybersecurity across the European Union by establishing a uniform baseline of security measures, focusing on critical infrastructure, incident reporting, and cross-border coordination. The Directive itself is a powerful tool, but there’s a catch: it requires individual member states to translate its provisions into national law, a process that leaves room for delays and inconsistencies. Had it been passed as a regulation, its immediate applicability would have ensured more streamlined and consistent compliance. But as it stands, the uneven pace of implementation across member states threatens to undermine its potential impact.

Germany, currently six months behind schedule, exemplifies the challenges in turning political promises into tangible action. While the cybersecurity conversation remains a popular talking point in speeches, the urgency of addressing real-world cyber risks seems underestimated. And in a world where cyberattacks are increasingly sophisticated and frequent, every delay leaves critical infrastructure more exposed.

Missing Pieces: The “Durchführungsverordnung”

As usual: I am not a lawyer, but one of the most pressing challenges for organizations preparing for NIS2 compliance is the absence of detailed regulatory guidance. Some legal instruments are still missing at the national level, notably a “Durchführungsverordnung” (Implementing Regulation) of the kind that exists at the EU level. Such a regulation should provide the concrete, actionable details on how the directive’s rules are to be enforced and what specific technical standards must be met.

Such a specification is expected and needed to offer the necessary administrative and procedural details at the national level, ensuring organizations know exactly what is expected of them. In Germany, having access to such a detailed document is crucial for organizations to understand their obligations under NIS2. Without it, they just cannot develop the processes they need to comply effectively, and that puts both their operations and security posture at risk.

The Need for Well-Defined Notification Duties

A core aspect of NIS2 is the requirement for organizations to report cybersecurity incidents, especially those that threaten critical infrastructure. However, the details of what exactly constitutes a reportable incident remain unclear. This “fuzziness” in definitions means organizations could either over-report, leading to unnecessary administrative burden, or under-report, leaving serious threats unnoticed.

Beyond incident reporting, it’s essential that organizations receive timely feedback from authorities. A well-defined feedback loop allows businesses to adjust their security strategies based on emerging threats and evolving attack vectors. But, until clear guidance is issued, these processes remain underdeveloped, leaving companies unsure of how to respond to incidents and how to improve their cybersecurity posture in real-time.

Going Beyond ISO 27001: Meeting NIS2’s Requirements

Many organizations might think that being compliant with ISO 27001 or other established cybersecurity frameworks is enough. While ISO 27001 offers a strong foundation - focusing on risk management, information security, and control structures - it falls short of the specific requirements imposed by NIS2. The Directive goes further, introducing mandatory reporting obligations, sector-specific rules, and increased regulatory oversight. In short, organizations need to go beyond their traditional control frameworks to fully meet NIS2’s stringent demands.

More Than Just Technology: A Holistic Approach to Compliance

One of the most underestimated aspects of NIS2 is its focus on a holistic approach to cybersecurity. Compliance isn’t just about having the right technology in place; it’s about creating a robust framework that includes policies, processes, organizational structure, and people. Each of these elements plays a crucial role in ensuring that an organization can not only prevent incidents but respond effectively when they occur.

Policies: Clear and enforceable security policies are the foundation of any cybersecurity strategy. These policies need to be aligned with both the organization’s goals and regulatory demands, providing a formal framework that governs the use of technologies and the response to incidents.

Processes: Incident response, risk assessments, and continuous monitoring must be integrated into daily operations. These processes define how threats are detected, reported, and mitigated, ensuring that organizations are prepared to meet NIS2’s strict reporting timelines.

Organizational Structure: Cybersecurity efforts must be coordinated across the entire organization. This includes having clear governance structures, with defined roles for key personnel such as the CISO, compliance officers, and dedicated security teams.

People: Human error is often the weakest link in cybersecurity. NIS2 emphasizes the need for regular training and awareness programs, ensuring that all employees - not just IT staff - are aware of the risks and know how to respond to threats.

The Clock Is Ticking

Despite the delays in many EU member states, the urgency to act is real. Organizations that have not yet begun their compliance journey are at significant risk, and even those that are somewhat prepared still face challenges in aligning with the directive’s requirements. Waiting for final regulations to be fully in place is not an option - time is running out, and achieving compliance will require significant time, effort, and resources.

KuppingerCole Analysts are well-equipped to assist organizations on their journey to compliance and cybersecurity maturity. Our advisory team brings extensive experience in supporting clients through complex cybersecurity initiatives, and we’ve already laid significant groundwork in the areas of ISO 27001 and TISAX certifications, helping businesses strengthen their security frameworks and meet industry standards. Our experts can provide tailored advice and actionable strategies to ensure that your organization is on the right track.

Here’s how we can further support your cybersecurity efforts:

New Membership for Cybersecurity Research: We’ve launched a new membership offering that provides exclusive access to cutting-edge cybersecurity research, helping organizations stay ahead of emerging threats and compliance challenges. Members also benefit from direct access to our analysts and advisors, offering personalized guidance to navigate regulatory changes like NIS2 or tackle specific cybersecurity issues your organization may face.

The cyberevolution 2024 Event in December: Don’t miss our upcoming event, cyberevolution 2024, taking place December 3-5, 2024 in Frankfurt, Germany. This event will bring together cybersecurity practitioners, industry experts, and thought leaders to discuss the latest trends, challenges, and solutions in the cybersecurity landscape. The conference will feature a wide range of tracks covering critical topics like NIS2 compliance, Zero Trust, identity-centric security, and much more. It’s the perfect opportunity to network with peers, learn from top experts, and gain insights that can help you implement robust cybersecurity measures.

The deadline may be today, but the journey is just beginning.


Ontology

Ontology Weekly Report

October 7th — 14th, 2024

Ontology Network 🌐 Latest Updates

NOWChain Campaign: Don’t forget, our ongoing campaign with NOWChain is still active! Keep participating to earn rewards and stay involved.

DIF Hackathon Workshop: If you missed our DIF Hackathon workshop, don’t worry! We have the recording for everyone to catch up on the insights and demos.

Orange Protocol 🍊 Decentralized Identity and Privacy

New Partnership: Orange Protocol has partnered with TRikon to further decentralize identity and enhance privacy in Web3. Stay tuned for updates on this exciting collaboration!

Community 🌍 Engagement and Growth

Community Catch-Up: As usual, our Community Catch-Up session was held, with great insights shared across our community.

Australia Node Highlight: Check out the latest post from our Australia Node team, celebrating their achievements and sharing future plans.

Stay Connected 📱

Stay engaged with Ontology by following us on our social media channels. Your active participation is key as we continue building a more secure, decentralized, and inclusive digital world.

Follow us: Ontology website / ONTO website / OWallet (GitHub) / Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog / Telegram Announcement / Telegram English / GitHub / Discord

Ontology Weekly Report was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

The EUDI Wallet: A First Step on Germany’s Way Into a Flexible Digital Identity Future


by Martin Kuppinger

Germany has officially launched the German part of the EUDI Wallet initiative, a significant advancement in digital identity management as part of the broader European eIDAS 2.0 regulation. The recent announcement by the German Ministry of the Interior covered the way forward to the planned availability in 2027 and the openness for private 3rd parties to also provide certified EUDI wallets in Germany.

The EUDI Wallet initiative, with every member state being obliged to provide such wallets to the citizens by 2027, enables citizens to carry secure digital IDs on their smartphones, making the digital future more accessible. Yet, the impact of this move extends far beyond simply digitizing ID cards; it opens up a world of potential for decentralized identity, cross-border verification, and secure data sharing.

Flexibility Through Openness: A New Paradigm for Digital Identity

One of the most exciting aspects of the EUDI Wallet is its open architecture. Instead of being limited to a single governmental solution, users will be able to choose from various wallets provided by both public and private entities. This marks a departure from older, more restrictive digital identity systems. The result? A competitive marketplace where innovation can flourish, but backed by standards for interoperability. Users will benefit from wallets tailored to specific needs, whether for travel, finance, or even healthcare. 

The inclusion of private providers fosters an environment where innovation is not only encouraged but essential. For instance, wallets could soon support more than just digital identification—they could incorporate advanced functionalities like micropayments or even specialized uses in sectors like healthcare, finance, or mobility. Envision having a travel wallet or finance wallet for everything you require for these use cases.

Innovation and Competition: The Role of Private Providers

The key to this transformation lies in the involvement of private companies, which will create new opportunities for competition and innovation. The OpenWallet Foundation, led by Daniel Goldscheider, is already laying the groundwork for open-source, interoperable wallets. This approach could lead to highly dynamic digital wallets that are capable of interacting seamlessly across different sectors and services.

However, the exact requirements for private wallet providers remain a bit unclear. Certification processes and regulatory hurdles could pose challenges. These will need to be addressed to ensure the widespread adoption of private wallets while maintaining security and interoperability standards.

Beyond Basic Functionality: A Multi-Purpose Digital Wallet

While traditional digital wallets have often been confined to single-purpose applications, the EUDI Wallet aims to break this mold. The open ecosystem allows for multi-functional wallets that can serve various purposes. Imagine using a single wallet not just for banking, but also for travel bookings, healthcare appointments, and even identity verification across borders. Or, as mentioned above, feature-rich apps involving a wallet for such use cases.

This versatility could prove transformational for both consumers and businesses. The ability to incorporate decentralized identities, secure payments, and cross-border functionality into a single solution could unlock significant economic value and streamline a range of processes.

Decentralized Identity: A Game-Changer

Decentralized identities are a cornerstone of this vision. Instead of being controlled by a central authority, decentralized identities allow individuals to have greater control over their personal data. This could revolutionize the way we interact with digital services, providing a higher level of security and privacy. 

Decentralized IDs will also play a critical role in broader, more complex use cases. For instance, secure data sharing and cross-border verification processes will benefit immensely from this technology. This is not just about convenience, but about creating a more secure, interoperable digital future.

The Path Forward: What’s Next for the EUDI Wallet?

Despite the remaining uncertainties, one thing is clear: the EUDI Wallet has the potential to redefine how we think about digital identity. Initiatives like eIDAS 2.0 and the OpenWallet Foundation are laying the groundwork for a future where digital identity systems are not just interoperable but truly dynamic.

As more complex use cases come into play, the value of this technology will become increasingly apparent. For instance, as highlighted in my EIC keynote, the broader application of wallets—whether for decentralized identity verification, applying for loans at banks, or onboarding employees—will unlock significant benefits for both users and the economy at large. Germany’s openness to supporting 3rd-party EUDI Wallets is hopefully just the beginning, and we can expect significant advancements in digital identity in the coming years.

The EUDI Wallet represents a bold move toward a more open, flexible, and innovative digital identity ecosystem. With the involvement of both public and private providers, the future of digital identity looks promising—and it’s happening now.


Nov 27, 2024: Don’t Let the Endpoints Become the Entry Door for Attackers

Most cyberattacks are identity-based and come in via endpoints. Identity Security on the one hand and Endpoint Protection on the other are thus cornerstones of every successful cybersecurity strategy. EPDR (Endpoint Protection, Detection & Response) has evolved as a unified approach that goes beyond traditional anti-malware and EPP (Endpoint Protection Platform) and adds detective and responsive capabilities. It also closely integrates with further detective and responsive technologies such as XDR (eXtended Detection & Response).

Ontology

Ontology Monthly Report — September 2024


Empowering the community, celebrating achievements, and driving innovation.

Ontology’s DIF Hackathon is officially live! Build with ONT ID and showcase your talent to the world of decentralized identity. We’re thrilled to see what our community will create!

Events and Partnerships 🤝

This month, we were part of exciting events and collaborations, working with innovative partners to expand the Ontology ecosystem.

NOWchain’s Autumn Celebration: We joined in celebrating NOWchain’s Autumn event, connecting with like-minded projects and communities.

AMA with Tuna Chain & Goshen: An interactive session where we discussed blockchain advancements and collaborations.

Kenzo Labs Collaboration: We teamed up with Kenzo Labs to further strengthen our Web3 solutions.

Glacier AMA: Engaging insights were shared in our latest AMA with Glacier.

On-Chain Metrics 📊

Here’s a snapshot of this month’s network activity and performance:

878 nodes: Our network continues to grow, with a total of 878 nodes. The staking rate stands at 25.976%.

19,588,331 on-chain transactions: We are seeing robust network activity, with nearly 20 million transactions on-chain.

177 DApps: The ecosystem remains vibrant, with 177 DApps contributing to a total of 7,793,147 transactions.

Community Engagement 💬

Our community continues to be at the heart of Ontology, driving discussions and engagement.

Privacy Hour: As usual, Privacy Hour was held, where members gathered to discuss the latest privacy-related developments in Web3.

Community Update: Regular community updates took place, keeping everyone informed and aligned on Ontology’s progress.

What’s Next? 🎯

Exciting things are on the horizon!

DIF Hackathon Workshop: Mark your calendars for October 8th! Join us for the DIF Hackathon Workshop and get hands-on with ONT ID.

Follow Us 📱

Keep up with Ontology by following us on our social media channels. Your continued support and engagement are vital to our shared success in the evolving world of blockchain and decentralized technologies.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Ontology Monthly Report — September 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology Monthly Report — August

Community and Web3 Influence 🌐🤝

DAO Governance: We had important discussions about DAO governance this month, with great insights shared across the community.

Ontology Odyssey: A new series, Ontology Odyssey, was published, bringing fresh perspectives on decentralized identity and blockchain.

Development/Corporate Updates 🔧

Development Milestones 🎯

Ontology Consensus Upgrade: 15% completion achieved.

ONT Leverage Staking Design: 80% complete, this innovative staking design will soon offer flexible staking options for our community.

Events and Partnerships 🤝

Galactica AMA: We participated in a joint AMA session with Galactica.

Node Competition: The winner of the node competition was announced.

Moongate AMA: We held an engaging AMA with Moongate.

TUNA Chain Collaboration & AMA: Ongoing collaboration with TUNA Chain.

BCCoin Collaboration: New partnership initiated with BCCoin.

GPT Wars Collaboration: Collaboration with GPT Wars to explore further integrations.

AITECH Collaboration: New collaboration with AITECH.

HYVE Job Listings: Job opportunities posted on HYVE to bring more talent into the ecosystem.

ONTO Wallet Developments 🌐🛍️

Partnership with NxFi: ONTO Wallet teamed up with NxFi to introduce enhanced wallet functionalities.

On-Chain Metrics 📊

dApp Ecosystem Growth: Our MainNet continues to support 177 dApps, demonstrating a healthy and active ecosystem.

Transaction Increases: This month saw 1,943 dApp-related transactions and 10,519 MainNet transactions, reflecting increased usage and engagement across the network.

Community Engagement 💬

Lively Discussions: Our community platforms were buzzing with insights and discussions from passionate members.

NFT Recognition: Active community members were recognized for their contributions with exclusive NFTs, celebrating their engagement and support.

Follow Us on Social Media 📱

Keep up with Ontology by following us on our social media channels. Your continued support and engagement are vital to our shared success in the evolving world of blockchain and decentralized technologies.

Ontology website / ONTO website / OWallet (GitHub)

Twitter / Reddit / Facebook / LinkedIn / YouTube / NaverBlog / Forklog

Telegram Announcement / Telegram English / GitHub / Discord

Ontology Monthly Report — August was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

DF111 Completes and DF112 Launches

Predictoor DF111 rewards available. DF112 runs Oct 17 — Oct 24, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 111 (DF111) has completed.

DF112 is live today, Oct 17. It concludes on October 24. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF112 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors. To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs. To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF112

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.
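
As a rough illustration of that flow, here is a minimal Python sketch. The budget figures are taken from this announcement, but the even daily spending schedule and the equal ROSE split are simplifying assumptions, not Ocean's exact formula:

```python
# Illustrative sketch of the DF112 reward flow described above. The budget
# figures come from this announcement; the even daily schedule and the
# equal ROSE split are simplifying assumptions, not Ocean's exact formula.

OCEAN_BUDGET = 37_500    # OCEAN spent by the DF Buyer agent across the round
ROSE_BUDGET = 20_000     # ROSE distributed at the end of the round
DAYS_IN_ROUND = 7        # DF112: Oct 17 to Oct 24

def daily_ocean_spend() -> float:
    """Even daily spend by the DF Buyer agent over the round."""
    return OCEAN_BUDGET / DAYS_IN_ROUND

def rose_share(active_claimants: int) -> float:
    """Naive equal split of ROSE among active, claiming Predictoors."""
    return ROSE_BUDGET / active_claimants if active_claimants else 0.0

print(f"OCEAN spent per day: {daily_ocean_spend():,.2f}")
print(f"ROSE per Predictoor (assuming 100 claimants): {rose_share(100):,.2f}")
```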

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF111 Completes and DF112 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 16. October 2024

Northern Block

Northern Block Secures Funding from Natural Resources Canada


Northern Block Secures Funding from Natural Resources Canada to Drive Digital Trust in the Mining Sector Through Sustainability Credentials, Enhancing Supply Chain Transparency and Global Accountability.

We are thrilled to announce that we have been awarded grant funding from Natural Resources Canada’s (NRCan) Global Partnerships Initiative, part of the Canadian Critical Minerals Strategy. This investment is not only a testament to the prior milestones we have accomplished in the mining ecosystem, but also an exciting leap forward in our mission to ‘credentialize’ sustainability data and enhance supply chain transparency across Canada’s critical minerals sector.

Transforming the Mining Ecosystem with Digital Credentials

Our ongoing work in the mining sector has centered around digitally transforming key sustainability metrics, particularly those aligned with the Towards Sustainable Mining (TSM) standard, into standards-based digital credentials. These credentials allow mining companies to demonstrate their commitment to responsible business practices and make that data securely available to key stakeholders, including investors and supply chain participants. The momentum we’ve built with the Mining Association of Canada (MAC), a strategic partner of ours, has been pivotal. We are now building on the successes we’ve already demonstrated, such as enabling mining operators to self-issue verified TSM reports as digital credentials. This progress was also made possible by the leadership and investment of the BC Government in the Energy and Mines Digital Trust (EMDT) program. This has resulted in greater transparency and trust in the supply chain, as highlighted in our previous work. This new federal funding will further accelerate the scale and impact of our efforts, ensuring that digital credentials become an integral part of the global mining ecosystem.

With NRCan’s support, Northern Block will continue to streamline and digitize the reporting processes for MAC members, transforming sustainability reports into secure, digital credentials. These efforts directly support global sustainability initiatives like the United Nations Transparency Protocol, ensuring that data from initiatives such as TSM can be integrated into digital product passports, and contribute to more transparent and sustainable supply chains globally.

Expanding the Reach of TSM with Digital Product Passports

Our efforts, led by the Mining Association of Canada (MAC) and the Towards Sustainable Mining (TSM) standard, are setting the benchmark for sustainability data in the mining industry. TSM has taken the lead in creating verifiable digital credentials that enhance transparency and trust across the entire supply chain. As these digital credentials gain traction, we’re seeing increasing interest from other global standards bodies, eager to follow the path MAC has paved.

Our work ensures that TSM credentials will serve as a foundational data source for emerging digital product passports, making it easier for mining companies to securely share critical information on sustainability, ethical sourcing, and environmental impact. By positioning TSM at the forefront, we’re not only adding value to MAC members but also shaping the future of supply chain transparency for critical minerals worldwide.

Exciting Releases Ahead for 2024

As we look ahead, we are excited about the upcoming releases in Q4 2024, developed in partnership with the Mining Association of Canada and several of their key members, including major mining operators. These developments will further solidify the role of digital credentials in the mining industry and reinforce Canada’s leadership in responsible mining practices.

Thanks to the support from Natural Resources Canada, we now have a three-year roadmap that allows us to double down on building and expanding this ecosystem. This grant represents a significant validation of the work we’re doing and the value we’re creating, not just for our current partners but for the entire critical minerals ecosystem. We’re eager to leverage this momentum and continue driving innovation across the mining sector.

For more information, please contact:

Northern Block:

Website: northernblock.io
Email: Mathieu Glaude, Founder & CEO – mathieu@northernblock.io

The post Northern Block Secures Funding from Natural Resources Canada appeared first on Northern Block | Self Sovereign Identity Solution Provider.


KuppingerCole

Dec 17, 2024: HigherEd CIO Virtual Summit: Driving IT Efficiency With Automation of Student Matriculation and Access Governance

In the post-COVID era, higher education institutions face unprecedented challenges in managing student matriculation and access governance. With IT departments stretched thin and an influx of in-person students, the risk of over-provisioning and compliance violations has skyrocketed. Balancing efficiency with data security and privacy concerns with FERPA, HIPAA, and other regulations has become a critical issue for HigherEd CIOs and IT professionals.

Nov 20, 2024: Transforming SOCs: The Power of SOAR Solutions

Cyberattacks are becoming increasingly sophisticated, requiring innovative approaches to cybersecurity. This webinar will explore how Security Orchestration, Automation, and Response (SOAR) platforms can revolutionize incident response by providing security teams with advanced threat detection and mitigation tools. We'll discuss the challenges of traditional SIEM systems and the transformative potential of integrating generative AI into SOAR solutions.

Metadium

25–27 Circulation Disclosure and Distribution Plan


Dear Community,

Today, we are announcing a new distribution plan for 2025 to 2027. The distribution for each year will be appropriately allocated to miner rewards, maintenance, ecosystem activation, and foundation operational costs, aiming to maintain a balanced ecosystem.

[Distribution Plan]
2025: 24,000,000 META
2026: 24,000,000 META
2027: 24,000,000 META

The total amount for 2025–2027 is 72,000,000 META. Below are the details of the scheduled distribution for each category over the three years.

Miner Rewards (11.67%, Total 8,400,000 META)
Miners play a key role in securing the network and processing transactions. Rewards for their contributions are essential for the stable operation of the network.

Maintenance (36.33%, Total 26,160,000 META)
Adequate resources are needed for the blockchain’s long-term maintenance and technical upgrades. This ensures ongoing network stability and technical support.

Ecosystem Activation (32%, Total 23,040,000 META)
Through marketing, partnership building, and community expansion, we aim to keep the blockchain ecosystem active, focusing on attracting more users and increasing the platform’s value.

Foundation Operational Costs (20%, Total 14,400,000 META)
Foundation operational costs are essential for managing and expanding the blockchain ecosystem. The foundation oversees the project’s direction and uses these funds for sustainable operations.

We sincerely thank the community for your continued support and will continue our efforts to develop the Metadium ecosystem further.


Metadium Team

Website | https://metadium.com
Discord | https://discord.gg/ZnaCfYbXw2
Telegram(EN) | http://t.me/metadiumofficial
Twitter | https://twitter.com/MetadiumK
Medium | https://medium.com/metadium

25–27 Circulation Disclosure and Distribution Plan was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 15. October 2024

KuppingerCole

A False Sense of Security: Authentication Myths That Put Your Company at Risk


In today's digital landscape, organizations often fall prey to a false sense of security, particularly concerning authentication practices. Misconceptions about identity security can leave companies vulnerable to evolving threats, potentially compromising sensitive data and systems. Understanding the realities behind these myths is crucial for developing robust authentication strategies.

Modern technology offers advanced solutions to address these authentication challenges. By leveraging dynamic risk flows, adaptive authentication methods, and comprehensive identity management systems, organizations can significantly enhance their security posture. These technologies enable a more nuanced and effective approach to authentication, moving beyond static, one-size-fits-all solutions.  
Paul Fisher, Lead Analyst at KuppingerCole, will moderate this insightful session. He will guide the discussion, ensuring that key authentication myths are thoroughly examined, and that practical, actionable insights are shared with the audience. His expertise will help frame the conversation within the broader context of identity and access management.

Stuart Sharp and Alicia Townsend from One Identity will investigate common authentication myths, such as the adequacy of MFA alone and the perceived security of certain authentication methods. They will provide strategies for identifying user communities, classifying risk by application, and developing dynamic authentication flows to reduce lateral movement risks and enhance overall security.  




Indicio

Why you need to add Verifiable Credentials to your biometric authentication systems

Biometric authentication is a brilliant solution to the problem of passwords and usernames for identity access management that replicates one of the worst features of password management. Now, that feature threatens to create havoc across the world’s biometric identity systems. We discuss how Verifiable Credentials can solve the problem.

By Sam Curren

The value of biometrics in authentication

You can’t forget your face; it’s always with you, making it a powerful way to manage identity access. But, realistically (which is to say, outside Mission Impossible movies), you can’t reset your face if its digital copy gets stolen. Unfortunately, that’s just the risk we’ve created by storing biometric data for verification in centralized databases.

The good news is that you don’t have to scrap your existing biometric authentication systems.

Biometrics are fast, cool, unforgettable (in the password sense), and easy to use. We want you to use them; we just want to be sure that your end users don’t pay the price.

How biometrics work

Traditional biometric systems are very simple. There are two pieces that work together: enrollment and verification.

Enrollment is where your biometric data is collected. It usually requires a few tries of scanning your face or finger to collect several samples so that the system can compute a template with some allowance for variation. This template is then stored in a database to be accessed and compared against the information you present when trying to access the system.

When a biometric is collected at an access point through a scan, it is compared to the stored biometric template. Most systems are not looking for an exact match; instead, they compare key points in the template, such as the position of the swirls and loops in your fingerprint. The combination of the variation in individual biometrics and the selection of key biometric points allows for accurate identification given random variations during scanning — such as a finger not quite aligned in the same way as when the template scan was conducted.
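
To make the enroll-then-verify loop concrete, here is a minimal Python sketch. It is illustrative only: real systems extract feature vectors (minutiae positions, face embeddings) from raw scans, while here a template is just a short list of numbers, and the function names, sample values, and tolerance threshold are all assumptions.

```python
# Minimal sketch of biometric enrollment and verification, assuming scans
# have already been reduced to small numeric feature vectors.
import math

def enroll(samples: list[list[float]]) -> list[float]:
    """Average several scan samples into a stored template."""
    n = len(samples)
    return [sum(dim) / n for dim in zip(*samples)]

def matches(template: list[float], scan: list[float], tolerance: float = 0.5) -> bool:
    """Accept a scan if it falls within a distance threshold of the template."""
    return math.dist(template, scan) <= tolerance

# Enrollment: three slightly different scans produce one template.
template = enroll([[1.0, 2.1, 3.0], [1.1, 2.0, 2.9], [0.9, 2.0, 3.1]])

print(matches(template, [1.0, 2.05, 3.0]))  # True: small random variation
print(matches(template, [4.0, 0.0, 1.0]))   # False: a different person
```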

Databases are the downfall of biometrics (and really any user data you’re trying to keep secure)

The problem facing biometric systems is the same one that has plagued logins and passwords since their inception: the system relies on a centralized database to function, and storing all this biometric data in one place creates an attractive target for a data breach.

This turns a security problem into an existential risk for biometric databases because, unlike passwords and logins that can be quickly changed in the event of a breach, people’s biometrics are largely unchangeable and will continue to be compromised. Once a database has been compromised and the information is out there, it becomes easy for bad actors to generate false positives that will work on any other systems tied to the user’s biometrics.

Solving the problem (bring your own biometrics)

When biometrics were first introduced, there wasn’t really a good way around the database problem; but since then, we have developed Verifiable Credentials, which offer a tamper-proof, decentralized way for people to hold their information and biometrics themselves on their mobile devices.

The process looks largely the same: your system captures a person’s biometric and creates a template in the same way as normal; but instead of saving that template in a database, it is issued to the person or their guardian inside a Verifiable Credential.

As the template is digitally signed, it cannot be altered without being detected. And because Verifiable Credentials use cryptography to prove who they were issued by, you can be certain the template was issued by your systems or another source you trust.

This means that when a person presents themselves for a biometric check, they also present their biometric template from their Verifiable Credential. The verifying party simply compares to see if both match without the need to store any biometric data.
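
A minimal sketch of that issue-then-verify step, using Ed25519 signatures from the widely available Python `cryptography` package. It is a simplification: a production system would wrap the template in a W3C Verifiable Credential rather than signing raw bytes, and the payload below is a placeholder.

```python
# Sketch of issuing a signed biometric template and verifying it later.
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Issuer side: enrollment produces a template, which the issuer signs.
issuer_key = ed25519.Ed25519PrivateKey.generate()
template = b"feature-vector-bytes-from-enrollment"  # placeholder payload
signature = issuer_key.sign(template)

# Verifier side: check the issuer's signature before trusting the template.
issuer_public = issuer_key.public_key()
try:
    issuer_public.verify(signature, template)
    print("Template is authentic; proceed to biometric comparison.")
except InvalidSignature:
    print("Template was altered or not issued by a trusted source.")
```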

And because there is no personal information stored, there are a ton of benefits: easier data privacy compliance, lower liability, no possibility of mass compromise, and more privacy, security, and control for the end user. Most importantly, you keep all the benefits of biometric authentication.

To learn more about Verifiable Credentials and Biometrics you can watch the recent Meetup Indicio hosted on biometric authentication, or reach out to our team of experts.

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Why you need to add Verifiable Credentials to your biometric authentication systems appeared first on Indicio.


Dock

Governance Vote: Changes to the Validator Structure


As part of the planned merger of the DOCK token and blockchain with the cheqd blockchain, which DOCK token holders approved, the network is set to sunset in 2025. With this significant transition on the horizon, we are proposing proactive measures to safeguard the network from potential volatility and to ensure its stability during this period.

In light of this, we are proposing an important change to the validator structure: reducing the active set of validators from 50 to 20. 

Additionally, we propose raising the self-bond requirement to 1 million tokens for any entity wishing to participate as a validator.

If you are currently among the top 20 validators, measured by total bonded amount, this change will not affect your status. However, to remain an active validator, you must be in the top 20. We recognize that this action may reduce the overall decentralization of the network in the short term. However, this is a necessary measure designed to ensure the smooth management of the sunsetting network. Once the token migration begins, we will monitor the state of the network and may need to adjust these settings again.

This proposal is now open to voting for DOCK token holders. You will have 7 days to cast your vote. If approved, the changes will be automatically enacted at the conclusion of the voting period.

Vote here: https://fe.dock.io/#/democracy

We value your participation in this important decision.


KuppingerCole

Beyond the CE Mark: How the Cyber Resilience Act Redefines Product Security


by Martin Kuppinger

In three years, the familiar CЄ mark will take on a new role: signaling compliance with robust cybersecurity standards. While this might sound like just another consumer-facing regulation, it’s actually part of a much larger transformation under the EU’s Cyber Resilience Act (CRA). This legislation is not merely about putting a sticker on products; it marks a shift in how security is integrated into the lifecycle of everything from household devices to vehicles. If there’s software in a product, security must be built in from the very beginning—by design, not as an afterthought.

From NIS2 to UNECE R155: Security by Design

The CRA sits alongside other crucial regulations aimed at fortifying Europe’s digital ecosystem. The NIS2 Directive, for example, broadens the scope of the original NIS directive to include critical sectors like healthcare, energy, and transportation, enhancing the security of network and information systems. It enforces stricter requirements for incident reporting and proactive risk management, directly addressing today’s complex threat landscape.

Meanwhile, in the automotive sector, UNECE regulations R155 and R156 are revolutionizing how vehicles are secured. UNECE R155 requires manufacturers to implement cybersecurity management systems (CSMS) to prevent hacking and cyberattacks, while UNECE R156 ensures that vehicle software remains up to date, mandating secure over-the-air (OTA) updates. These regulations cover both new and existing models, forcing manufacturers to rethink how they protect connected vehicles throughout their entire lifecycle.

Cybersecurity Costs Hit Fiat 500

These regulatory shifts are already making waves in industry. A very tangible example is Fiat’s decision to end production of the beloved Fiat 500, after 17 years and millions of units sold. The reason is that the costs of retrofitting older models to meet the stringent UNECE cybersecurity standards, specifically R155 and R156, proved too high. Fiat is not alone; other manufacturers may also find it challenging to upgrade their legacy systems to meet new requirements, signaling the profound impact these regulations will have across the automotive sector.

The Cyber Resilience Act: Security at the Core

The CRA is part of a broader regulatory effort to ensure that every digital product—not just cars—meets strict security standards. More than a compliance measure, the CRA enforces security-by-design, a principle that requires manufacturers to anticipate and mitigate cyber threats from the earliest stages of product development. This shift has implications far beyond product safety; it also affects the entire supply chain, as vendors and partners must meet the same high standards.

No longer can companies afford to treat cybersecurity as an afterthought. It’s now at the heart of digital business, impacting not only product design but also how products are maintained and updated over time. In an era where every connected device is a potential target, this approach ensures resilience in the face of evolving threats.

Future-Proofing Digital Europe

What we’re seeing is a clear message from the EU: cybersecurity must be baked into every layer of product development and supply chain management. The CE mark may be the most visible sign of this change, but behind it lies a robust legal framework designed to safeguard the future of Europe’s digital economy. From vehicles to consumer devices, the CRA and related regulations like NIS2 and UNECE R155/R156 are reshaping how businesses design, deploy, and secure their products.

The era of retrofitting old models with new security patches is coming to an end. For businesses (and every business is a digital business nowadays), now is the time to embrace cybersecurity as a central pillar of their product strategy and corporate strategy. Anything less, and they risk not just regulatory penalties but losing the trust of consumers in a world where digital safety is paramount.


Elliptic

Crypto regulatory affairs: Singapore consults on new digital payment token licensing and compliance guidelines


The Monetary Authority of Singapore (MAS) is planning to establish strict licensing criteria for cryptoasset firms that serve an international clientele from Singapore. 


Tokeny Solutions

RWA and DePIN: The Future of Assets and Infrastructure

What is RWA?

In the blockchain world, Real World Assets (RWA) refer to tangible, physical assets with economic value, such as real estate, gold, vehicles, and art. Tokenizing these assets offers three main benefits: it opens the door for more people to invest by lowering barriers to entry, enables easy transferability—similar to sending a PayPal transaction—and allows the assets to be used in decentralized finance (DeFi) applications, such as providing liquidity in an AMM or using them as collateral to borrow tokenized cash.

Fractionalizing the ownership of these RWAs often turns the assets into financial instruments. Typically, this involves creating an investment vehicle such as a Special Purpose Vehicle (SPV) to hold the underlying asset. Tokenization is the process of representing ownership of financial instruments, such as shares or debt of the SPV, as tokens on a blockchain, allowing for digital purchase, self-custody, easy transfers, and usage of the assets. Because these tokens represent securities, they must comply with strict regulatory rules: only qualified investors meeting regulatory conditions can trade and hold them.

In most cases, the ERC-20 standard should not be used for tokenizing RWAs, as ERC-20 tokens are permissionless, allowing transfer to anyone without restriction; however, bearer instruments are illegal in most jurisdictions. This is where permissioned tokens using the ERC-3643 standard become vital: they ensure that only qualified users can hold them, which is crucial for compliance with regulations.
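
As a rough illustration of the difference, here is a Python model of a permissioned transfer check (this is not the actual ERC-3643 Solidity interface; the class and method names are invented for clarity). The key point is that, unlike a plain ERC-20 transfer, the token consults an identity registry before any value moves:

```python
# Toy model of a permissioned token: transfers only succeed between
# addresses a registry has verified. Names are illustrative, not ERC-3643.

class IdentityRegistry:
    """Tracks which addresses belong to verified, qualified investors."""
    def __init__(self) -> None:
        self._verified: set[str] = set()

    def register(self, address: str) -> None:
        self._verified.add(address)

    def is_verified(self, address: str) -> bool:
        return address in self._verified

class PermissionedToken:
    """Token whose transfers succeed only between verified holders."""
    def __init__(self, registry: IdentityRegistry) -> None:
        self.registry = registry
        self.balances: dict[str, int] = {}

    def transfer(self, sender: str, receiver: str, amount: int) -> bool:
        # Unlike ERC-20, unqualified parties are rejected outright.
        if not (self.registry.is_verified(sender) and self.registry.is_verified(receiver)):
            return False
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True

registry = IdentityRegistry()
registry.register("alice")
token = PermissionedToken(registry)
token.balances["alice"] = 100
print(token.transfer("alice", "bob", 10))  # False: bob is not verified
registry.register("bob")
print(token.transfer("alice", "bob", 10))  # True: both parties qualified
```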

The RWA market is one of the fastest-growing markets in the blockchain industry, reaching an all-time high of $12 billion in tokenized assets, according to a Binance Research report. However, this figure doesn’t fully capture the market’s scale.

“At Tokeny alone, we’ve facilitated the tokenization of more than $32 billion worth of assets onchain.”

Shurong Li, Head of Marketing at Tokeny


Many of our clients choose not to make their data publicly available, as these are often private assets. Additionally, large institutions face challenges in accepting onchain cash due to regulatory uncertainty, and they still prefer to invest in fiat. Given the current scale, we expect this market to grow significantly in the coming years.

What is DePIN?

Decentralized Physical Infrastructure Networks (DePIN) is an emerging concept where decentralized networks are used to manage and operate physical infrastructure. These networks include cloud services, wireless networks, sensor networks, mobility and energy networks. DePIN networks incentivize individuals to contribute to the bootstrapping phase of growth without relying on outside resources. This means individuals are incentivized by tokens to build up the supply of infrastructure without the need for centralized operators. DePIN addresses the issue that traditional centralized infrastructure, operated by corporations, requires a significant investment of time and money for both building and maintaining infrastructure, making it nearly impossible for individuals to build networks.

The main driver of DePIN systems is for Web3 companies to outsource the building and maintenance of these network infrastructures. Take Hivemapper, for example, a decentralized digital map of the world (a sensor network). They provide users, known as “mappers”, with a dashcam to drive around and capture real-life images of everything they pass. This is one of the methods used to build and maintain the infrastructure of this network. The incentive for the individuals contributing is earning tokens that hold monetary value, which can be redeemed to access premium map data and participate in governance decisions. The more a user contributes, the more infrastructure is built and maintained, and the more tokens they receive as an incentive.

What is the Difference Between the Two?

Although RWAs and DePIN both interact with the physical world, they have different use cases and operate in distinct ways. These differences include their purpose, the markets they operate in, the regulations involved, and the concept of ownership versus contribution.

RWA operates in the financial sector, involving tangible real-world assets like real estate, gold, or art that are converted into tokens representing fractionalized ownership. These tokens can be bought, sold, and traded among authorized investors.

“To ensure compliance, RWAs must strictly follow regulations, often using permissioned tokens such as ERC-3643.”

Luc Falempin, CEO of Tokeny


The goal of RWA is to democratize the investment and ownership of physical assets, making them more accessible to a wider range of investors through tokenization.

In contrast, DePIN focuses on decentralizing the construction and maintenance of networks in the infrastructure sector. Instead of tokenizing existing assets, DePIN networks incentivize individuals to contribute physical resources such as server hosting, energy storage, and data collection. Contributors earn tokens that often provide exclusive benefits and hold monetary value in exchange for their participation. DePIN faces fewer regulatory challenges since it involves contributions to infrastructure rather than ownership of assets.

At the same time, both RWA and DePIN require onchain identity management. For RWA, onchain identity ensures compliance by verifying KYC status and guarantees unlosable ownership. Tokenized RWAs also have their own onchain identity, allowing the data associated with the assets themselves to be enriched. In the case of DePIN, without robust verification of the devices or service providers contributing to the network, there is a risk of payouts being claimed fraudulently, which can harm the network’s performance. This makes decentralized identity (DID) frameworks crucial for DePIN as well.

ONCHAINID, an open-source DID framework used in ERC-3643, is an excellent solution. A verifier can conduct the necessary checks and issue verifiable credentials as proof. This ensures that only properly functioning devices and valid contributions to the network are recognized, maintaining the integrity and sustainability of the system and enhancing its overall performance.

What is the Opportunity to Make the Two Work Together?

Combining RWA and DePIN presents a significant opportunity to transform both financial investments and infrastructure development. Together, these sectors can drive growth and innovation in the form of a hybrid ecosystem.

Tokenization of Infrastructure: Co-Ownership of DePIN Devices

Real-world devices, such as renewable energy systems or critical IoT infrastructure, can be costly for individual investors. By tokenizing ownership, for example through units or shares of a fund that invests in one or many DePIN devices, people can co-own multiple DePIN devices. The key benefits are listed below.

Improved Accessibility: Allowing individuals to co-own expensive infrastructure devices opens up investment opportunities for a wider pool of participants, making it possible for people to co-own high-value assets like solar panels or data nodes.

Enhanced Transferability: Unlike physical devices, which can be difficult to sell or exchange, fractional ownership can be traded far more easily. Moreover, tokenization enables peer-to-peer transfers, enhancing transferability and eventually increasing the liquidity of the assets.

New Opportunities and Stability of the DePIN Network: Beyond just owning a piece of the infrastructure, the tokenized shares can also be used in DeFi applications. Investors can provide liquidity, stake, or use these tokens as collateral to generate additional financial yield, unlocking even more value from their co-ownership of the infrastructure assets without needing to sell the devices, which preserves the stability of the DePIN network.

Conclusion

In conclusion, RWAs and DePIN, while distinct in their purpose, share common ground in turning the physical world digital. The opportunity to combine these concepts opens the door for innovative applications in finance, infrastructure, and decentralized economies, creating more accessible, efficient, and resilient systems for managing physical assets and infrastructure globally. As blockchain technology continues to evolve, the synergy between RWAs and DePIN could be crucial in shaping the next wave of decentralization.


The post RWA and DePIN: The Future of Assets and Infrastructure appeared first on Tokeny.


Dark Matter Labs

#1 Are we coding too soon? — Day 3


This blog is the first in a series documenting the Re:Permissioning the City (PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

Day 3: Product scoping — balancing strategy and feasibility

On the third and fourth days of the workshop, we started sketching wireframes based on the user journey. This required merging the two scenarios we developed, creating a coherent flow, and listing out both the technical and UI/UX requirements.

Once we laid out the entire journey, we quickly realised that a significant part of what we were building was in fact quite similar to a typical booking platform. There were two interdependent parts of the system: 1) a booking system that allows a user to list spaces and book events, and 2) a new permissioning system that introduces alternative ways to approve bookings and allows users to be part of permissioning groups and create rules.

Due to restrictions of time and capacity (4 months of development time), we had to prioritise, which meant we had to decide which part of the system we were going to build.

Through some debate, we came up with three potential strategies and decided to choose one.

Maximise Experimentation
In this approach, we aimed to minimise the necessary development efforts, particularly for features already common in the market. By doing so, we could redirect our development capacity towards creating interactive prototypes that facilitate permissioning experiments. This included exploring scenarios such as “How would a liability group come together?” and “How would this permissioning group share decision-making responsibilities?” (Focusing on permissioning system)

Risks:

There is a risk that funders may not support this approach, as the booking system may not be fully functional.
The development timeline could be delayed because the prototype is not fully specified yet, meaning the development team would need to wait until the prototype design is finalised.

Opportunities:

Focusing on experimentation benefits future pathway building, especially for innovation funding.

Maximise Potential for Real Users
This strategy recognises the significant impact of involving real users at the end of the process. We proposed developing a functional booking system while continuing to explore the formation of liability groups and the permissioning mechanism through design workshops. (Focusing on booking system)

Risks:

Misalignment with the broader Dm identity and portfolio: investing too much time in developing features that are already available in the market may not align with Dm’s vision and strategic goals.
Deadline pressure: even if we allocate all development capacity to building a fully functioning system, we may still struggle to meet the 4-month deadline.
Project objectives: we want to validate our concept around permissioning through this first phase, and we cannot do that by building a booking system.
Funding risks: it depends on what kind of funding we are going for, but innovation funders will want to see the innovation.

Opportunities:

Foundation for future experimentation: developing functional software provides Dm with solid tools and a platform for future experimentation.
High-quality delivery: this approach ensures that we deliver a fully functional system, likely to perform well in assessments.
User ready: if we can have real users, we can apply for other types of funding, e.g. specifically for product development.

Interoperable Permission Engine
In this direction, we focused on the importance of a fully functional user flow while dedicating our development efforts to creating versatile digital tools for experimentation. This includes developing an interoperable permissioning plugin compatible with existing booking systems. (Focusing on permissioning system)

Risks:

We could end up developing the entire full-stack system for the MVP. It’s uncharted territory, so we might underestimate the development time needed.

Opportunities:

Can focus on building more value-aligned outcomes.
Can acquire a broader range of potential users than running a single platform ourselves.
Can provide more dynamic types of usage by having some flexibility on the scope of entities who host the permission engine.

We ended up choosing the third option, which was suggested by our backend developer Donghun. We documented this lengthy debate and decision-making process because it triggered a lot of critical, fundamental questions and areas for clarification.

What does product development in Dark Matter Labs look like?

Triggered by the debate around feasibility and vision, we had a chance to reflect on the tensions caused by different priorities. As a collective of individuals primarily trained in architecture, design, and policy, Dark Matter Labs as an organisation doesn’t resemble a typical tech start-up. So what does a product development journey look like in our unique context? How is the product that we are striving to develop different, and how should the development journey be adapted to work with our current team dynamics, without compromising delivery? We don’t assume that we can answer these questions right now, but we document here some of the reflections that emerged in our conversations during the workshop.

How is our product different?

We all agree that Dark Matter Labs is not a tech start-up trying to make a product that responds to market demands. We are more of a strategic design and research lab interested in elucidating systemic problems and developing experimental products that can provoke, and perhaps, solve some of these fundamental issues. In recent years, we’ve moved beyond crafting narratives that provoke thought, to actually building products that do both — provoke and solve problems. Circulaw is a good example of such a product built with actual users in mind. Having developers on the team who were involved in building products like Circulaw (and other market-ready solutions) gave us the opportunity to raise critical questions.

Product development at Dm presents unique challenges and opportunities, particularly when addressing systemic issues rather than simply filling market gaps or meeting unmet needs. Can we really build a product that addresses the problem of ownership and centralised governance? How far can we go in embedding our critical (but speculative) ideas into a product? Will people even understand and appreciate it? (Even our blogs are notoriously difficult to read). Who is our primary audience or user? Building a product that requires significant upfront resources and diverse capabilities compels us to answer these questions from the outset.

Are we coding too soon?

During the workshop, we had an opportunity to reflect critically on our current approach to transitioning projects into products, particularly how this process affects developers within Dark Matter Labs. One key takeaway was the importance of having a robust paper prototyping phase to validate key concepts and hypotheses before coding begins (tensions could emerge when project holders underestimate the labour of coding — and the labour of having to re-write it). This phase, alongside thorough user interviews and testing, would help refine smaller details early on. From a developer’s perspective, it’s much easier to focus on how to build something if the what has already been clearly defined. As Donghun pointed out, getting these what questions sorted beforehand allows developers to focus on building a product with technical integrity, without worrying about shifting goals.

There are definitely advantages to loosely structured projects within Dm which have been our default pattern — the ability to adapt to changing contexts, being open to radical iteration — but product development requires a different level of investment and nature of collaboration, which in turn demands new structures and practices. Perhaps it’s useful to clearly distinguish the paper-prototype phase supported by workshops, before attempting to start building digital prototypes.

We also realised there was room for improving how strategic designers and developers work together. How can we ensure smoother handovers from concept to execution? Developers thrive when they work on projects with real-world applications — projects that go beyond one-off workshop tools and are sustained long enough to generate meaningful data for future iterations. This sense of continuity and contribution is crucial for developer growth. Ideally, we envision a scenario where designers and developers co-create provocative projects that go live to meet real user needs, operating for a sufficient time to gather the data necessary for iteration and future improvements. This way, developers get a sense of growth and contribution, knowing their work has a lasting impact.

Wrapping up the workshop and looking ahead

This concludes the documentation of our first in-person workshop focused on product scoping. It wasn’t a very structured workshop at the beginning, but we managed to build the necessary structures and processes that allowed us to move to the next stage.

Defining the horizons
Deliberating on core principles and values of the product (more suggestions collected throughout the week)
Designing two types of scenarios and user journeys
Merging two scenarios into one user journey and sketching paper wireframes
Prioritising what to develop/code
Discussing pathway strategies
Ideating around branding/identity
Identifying questions for the future (collected throughout the workshop)

These were some of the concrete steps we took, with countless conversations in between. As we move on to the next phase of production, we hope that this documentation will serve as a template for teams that are looking to explore (digital) products — bridging strategic design and product development, and making the move towards transitioning projects into products.

Lastly, we share some questions that we identified throughout the workshop, which we have ‘parked’ for now.

Do we need everything to be decentralised? How far does decentralisation go?
What kind of deliberation and decision-making model would the permissioning group adopt? E.g. consensus-based, and what is the reasoning?
How do we help space stewards (the permissioning group) shape the rules of the space? What kind of facilitation is needed?
Will financial values be generated by spaces? How do we deal with financial value without encouraging rent-seeking behaviours?
How could Horizon 1 look different from the current system (while still operating within existing systems)?
How do we convince cities of new ways of doing things?
What is “functionality” for research grant funders? And how do we best meet their expectations regarding tech products? (Especially funders who are not typical product development funders.)

Read Day 1: Transitioning from project to product

Read Day 2: User journey and scenario building

This blog was written by Eunsoo Lee in conversation with the core team of Permissioning the City and utilising the records of the workshop.

Team members who contributed to the workshop (in alphabetical order):
Calvin Po, Donghun Ohn, Eunji Kang, Eunsoo Lee, Fang-Jui ‘Fang-Raye’ Chang, Hyojeong Lee, Shuyang Lin, Theo Campbell

Wider advisory group:
Indy Johar, Hee Dae Kim, Gurden Batra, Charlie Fisher

Partners and funders:
NIPA(National IT Industry Promotion Agency), P4NE(Partners for a New Economy), Parti

#1 Are we coding too soon? — Day 3 was originally published in Permissioning the City Product Journey on Medium, where people are continuing the conversation by highlighting and responding to this story.


#1 User journey and scenario building — Day 2


This blog is the first in a series documenting the Re:Permissioning the City (PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

Day 2: User journey and scenario building

On days two and three, we focused on developing the user journeys. The emphasis was placed on creating tangible, realistic use case scenarios which would help us identify the gaps in our concept and challenge where we might be relying too much on theory and assumptions.

We created a template that divides the system into front stage (frontend) — covering user actions and visible interfaces — and back stage (backend), which handles the behind-the-scenes logic and processes supporting these interactions. We also listed some choices for scenario building, such as types of permissions, users, and spaces.

Feel free to adapt our template

We decided to prioritise the event organiser and space stewards (space owners, managers, and broader stakeholders like neighbours) and split up into two groups, with one group focusing on a scenario around a music event, which deals with pre-defined/automated permissions and an exception approval case, and the other on a food-related event that deals with bespoke permissions. We chose music and food specifically as these scenarios are likely to introduce tensions or conflicts. Noise-level issues would allow us to explore how we might use sensors to verify and give real-time feedback to space users, preventing the escalation of conflict, and food/cooking would allow us to dig deeper into the liability mechanisms around fire risks, safety and hygiene.

Group 1 (left) on food/cooking and Group 2 (right) on music

Through the user journey exercise, we were able to clearly distinguish the differences between the three types of permission processes: pre-defined/automated, exception-based, and bespoke permissions.

Pre-defined/automatic
When requesting permissions to use a space, users will be able to choose from an existing template of rules. Template A might be suitable for loud music events with more than 50 people, template B might be suitable for small cooking classes, and template C for book clubs. These templates of rules (or rulebooks) can be initially drafted based on general liability considerations (number of people, types of activity, etc), and further adapted and modified through usage in a particular space. The idea is based on a precedent-based model, similar to case law: if permission has been granted previously for a kind of event without issues, similar future events will also be automatically permitted. For most events that can be classified under certain types of activities, these pre-approved templates will enable instant permissions, simplifying and speeding up the booking process.

Exception-based
Exceptions are cases where users request small deviations which need human approval. In one of our scenarios, Pim, who was hosting a religious worship event involving a music performance, wanted to request an increase in the maximum noise levels allowed. This involves modifying a single clause in one of the templates of rules. The request is processed by a ‘permissioning group’ — a group of people who have opted in to act as stewards of a particular space, and who have the responsibility to partake in decision-making as well as maintaining the space. The members of the permissioning group make a consensus-based decision on whether to approve or reject the exception. They are also made aware that the adapted permission they grant will become a template for future events.

Bespoke permissions
Bespoke permissions are reserved for rare cases where users are requesting permissions for a completely new type of activity which does not fit into any of the pre-approved templates. In our scenario, this was a proposal for a large local produce market held in a park. A request for bespoke permissions triggers a slightly more complex set of actions than the exception-based permissions. The event organiser will be prompted to construct a new template of rules based on the activity proposed. They will also fill out a self risk assessment form indicating their concerns and what they are excited about (the pros and cons). Submitting this request will trigger a notification to the permissioning group who will be given a due date to arrive at a consensus to approve or reject the new template. Once accepted, the event is permitted, and also future events with the same characteristics can use the template for automatic permission.
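
To make the three routes concrete, here is a minimal Python sketch of how such a dispatcher might look. All names, data shapes, and the consensus stub are illustrative; in the real system, exception and bespoke requests would notify the permissioning group and wait for a human, consensus-based decision:

```python
# Toy dispatcher for the three permission routes described above.
# Templates of rules are plain dicts; the consensus step is stubbed out.

APPROVED_TEMPLATES = {
    "loud-music-50plus": {"max_noise_db": 90, "max_people": 200},
    "small-cooking-class": {"max_noise_db": 60, "max_people": 15},
}

def permissioning_group_consents(request: dict) -> bool:
    """Stand-in for the group's consensus-based decision."""
    return True  # in reality: notify members, deliberate, decide by a due date

def process_request(template_id: str | None, overrides: dict | None = None,
                    new_template: dict | None = None) -> str:
    # Route 1: pre-defined/automatic, instant approval from a template.
    if template_id in APPROVED_TEMPLATES and not overrides:
        return "approved automatically (pre-defined template)"
    # Route 2: exception, a single clause modified, needs group approval.
    if template_id in APPROVED_TEMPLATES and overrides:
        if permissioning_group_consents({"base": template_id, "overrides": overrides}):
            # The adapted permission becomes a template for future events.
            APPROVED_TEMPLATES[f"{template_id}-adapted"] = {
                **APPROVED_TEMPLATES[template_id], **overrides}
            return "approved by group (exception)"
        return "rejected by group"
    # Route 3: bespoke, a wholly new template, needs group approval.
    if new_template is not None:
        if permissioning_group_consents({"new_template": new_template}):
            APPROVED_TEMPLATES[f"bespoke-{len(APPROVED_TEMPLATES)}"] = new_template
            return "approved by group (bespoke, now a reusable template)"
        return "rejected by group"
    return "no matching template; submit a bespoke request"

print(process_request("loud-music-50plus"))                        # automatic
print(process_request("loud-music-50plus", {"max_noise_db": 95}))  # exception
print(process_request(None, new_template={"max_people": 500}))     # bespoke
```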

Principles and values

Through the user journey exercise which compelled us to sketch out the details of each action and process, we were also able to define the basic principles of the platform which reflect the underlying logic and values of our concept. They will be used to further define key concepts like the permissioning group, template of rules, feedback systems, incentives and liability mechanisms, and so on. These principles and values can be considered as version 1, which will be iterated later when we have more experience to draw upon.

Governance

Power and liability as inextricably linked: if you want to make decisions, you need to share liability, i.e. have skin in the game

Prioritise proximity to space and physical presence (linked to shared risk and liability)
Giving away power is giving away liability (which is why space owners might want to share decision-making/permissioning power)

Permissioning based on precedents (like case law)

Every space starts with a basic template of rules, which is iterated thereafter
Everything is allowed (within legal limits) until something happens to change the rules
Templates need to be updated regularly (time-limited templates)

Permissions are peer reviewed (e.g. permissioning group)

Permissioning group performs the role of space stewards — responsible for maintaining permission templates and approving bespoke permissions
Anyone can join a permissioning group
Initial permissioning group can be formed through a combination of invitations (based on shared liability holders) and self opt-in through shared interests
Deliberations within the permissioning group prioritise consensus building — through dialogic processes (rather than majority rule)
Permissioning group participants are given a choice to opt out of a particular decision

Incentives

Prioritise system-level risk and benefit sharing — to avoid rent-seeking behaviours
Prioritise generating system-level incentives/benefits (rather than personal/individual)

Feedback

Based on incentives and positive feedback at the system level rather than penalties and punishment at the individual level
Encourage feedback on rules/permissions, not people and their conduct

Technology

We adopt technology not to maximise efficiency and profit, but to enable greater flexibility and freedoms. We acknowledge that technology could be exclusionary, and while we may not be able to address this immediately, we are committed to designing systems that prioritise inclusivity and accessibility. By embracing open standards and decentralisation, we aim to create tools that empower communities rather than control them.

Read Day 1: Transitioning from project to product

Read Day 3: Are we coding too soon?

This blog was written by Eunsoo Lee in conversation with the core team of Permissioning the City and utilising the records of the workshop.

Team members who contributed to the workshop (in alphabetical order):
Calvin Po, Donghun Ohn, Eunji Kang, Eunsoo Lee, Fang-Jui ‘Fang-Raye’ Chang, Hyojeong Lee, Shuyang Lin, Theo Campbell

Wider advisory group:
Indy Johar, Hee Dae Kim, Gurden Batra, Charlie Fisher

Partners and funders:
NIPA(National IT Industry Promotion Agency), P4NE(Partners for a New Economy), Parti

#1 User journey and scenario building — Day 2 was originally published in Permissioning the City Product Journey on Medium.


#1 Transitioning from project to product — Day 1


This blog is the first in a series documenting the Re:Permissioning the City(PtC) product development journey. In the spirit of “working out loud”, the series aims to share our ongoing progress, learnings, and reflections on building a digital permissioning system designed to unlock underutilised spaces in the city for civic use, through introducing participatory and distributed forms of spatial governance.

In June 2024, we received good news from one of the many applications we had submitted to develop the Re:Permissioning the City platform. This specific grant, awarded by the National IT Industry Promotion Agency (NIPA) of South Korea, allowed us to spend the next 5 months developing the first digital prototype. Having spent the last 3 years developing the concept through small research grants, we were overjoyed to finally have the opportunity to start building something tangible.

Once we assembled the team, composed of three developers, a graphic designer, and four strategic designers, we gathered in London for a week-long workshop. Looking back, it was an ambitious, high-stakes plan that required turning theory into a concrete product design in a matter of 5 days. We were betting on our combined ‘collective intelligence’ to figure out this challenge together.

Day 1: Defining the problem space and scope of our intervention

Like any ‘design & innovation’ project, we started by collectively defining and narrowing down our area of intervention. We did this through discussing the problem space, our objectives and value proposition, and through defining the various ‘horizons’ of the product we were setting out to build.

Problem space

Fairness in allocation of spaces: in the case of Daegu and other public/government-owned spaces, the current process for allocating shared spaces is seen as unfair. For example, a simple first-come-first-served approach often fails to prevent hoarding of use rights (whoever has more resources to submit applications has a higher chance of gaining rights). As seen in the case of the public square in front of Seoul City Hall, where right-wing Christian groups deliberately submit applications ahead of LGBTQI+ organisations to prevent them from hosting the queer festival, existing rules can be abused to discriminate against certain groups, which challenges the fairness and ethics of existing governance models.
Fairness in decision-making: existing governance around spaces is centralised and opaque: spaces are either controlled directly by their owners, or governed by rules set by intermediary organisations entrusted to manage them. Ordinary ‘users’ of spaces and other stakeholders (neighbours and others who have a stake) are almost always excluded from the rule-making and permissioning process.
Public value captured in private wealth: we challenge rent-seeking, private ownership models, where 1) public spaces are used to generate private wealth or 2) value generated by the public (e.g. rehabilitation through community activities) gets captured solely by land/space owners. The focus is on ensuring that public spaces are used in a way that benefits the community rather than being a source of income for private entities.
Decision-making based on individual interests: we advocate for a decentralised, commons-based approach to decision-making. The use of spaces in the city is rarely a concern for the property owner alone. Rather, how spaces are used will affect third parties in positive and negative ways, and also the health of the city as a collective whole. This means decisions on how spaces are used should be made collectively, considering the public or commons’ good rather than individual or organisational interests. The idea is to create a system where the use of space benefits the broader community.
Underutilisation of spaces: the current approach to managing public spaces is bureaucratic, which creates barriers to access and results in underutilisation. Even when spaces are managed by single entities (often NGOs and civil society organisations with a specific mandate), it takes a lot of resources to maximise utilisation, costs they often cannot afford.
Barriers to access: it’s not easy for the average citizen to find spaces to do stuff — often, spaces are hard to find (no central database), and then there are restrictions on types of use which can be difficult to navigate.
Rules are restrictive: existing rules around spaces (what you can do and not do) tend to be overly conservative, geared towards preventing potential conflicts. When people want to ask for bespoke permissions (if their activity does not fit into existing types of use), existing booking systems lack processes to easily handle these requests, instead reverting to ad-hoc, off-platform negotiation (or outright rejection). We need different kinds of rules and methods of negotiation that can ‘liberate’ spatial use, to accommodate more flexible and creative uses of space.

Hypothesis

Our hypothesis is that creating a system that enables easier (and democratic) access to public space for communities will remove barriers for people wanting to organise activities that generate social/cultural capital and public value. This will result in increased civic activity in a city (especially key for cities experiencing demographic/economic/social decline), which has broader societal benefits (reduced isolation, better mental health, less division).

How is what we are building different from conventional booking systems? Why is this way of doing things better?

Democratic: it opens up decision-making/rule-making around shared spaces to a wider range of stakeholders, and by encouraging a peer review/approval process, contributes to building democratic capacities.
Legitimacy and consent: a peer-reviewed permission process allows us to gather people’s consent for activities that might not have been possible before. The net effect is that more events can happen in the city (with legitimacy) because we have a more effective way of revealing and implementing the views of the population.
Mission-driven: making it easier for space owners and citizens to create social impact, rather than just maximising profit from rent-seeking activities.
Power distribution and liability sharing: liability and power are interlinked, which means if you have skin in the game, you get to participate in decision-making. The idea is to transition away from ‘externalities’, where the negative impacts of an individual’s decisions can be displaced onto the commons.
Open source: we are building an interoperable open source tool that people can fork and integrate into their existing systems.
Distribution of value: financial value derived from a space (e.g. an increase in property prices) is often hoarded by land/space owners. We will try to measure the non-financial value generated from civic activities, as well as distribute financial value across more stakeholders.

Horizons scoping

Typically, product teams will create a product roadmap. However, we decided to take a different approach, coming from a strategic design perspective. The key difference between a product roadmap and horizons scoping is that the former is execution focused, while the latter is focused on identifying and assessing different “horizons” or stages of future opportunities, challenges, and strategic goals over a longer period of time. In practice, we adapted elements of both — focusing on describing the hypotheses we wanted to test, while leaving room for uncertainty and more radical imaginations in Horizon 3 as an intended direction of travel.

Horizon 0 reflects the status quo; Horizon 1 is the scope, narrowed down considerably to fit the timeline and expectations of the 2024 prototype grant; Horizon 2 reflects what we aim to build as the first full product released to the public; and finally, Horizon 3 describes where our ambitions lie in the future. What we managed to map out during the workshop is in no way complete — in fact, the process of mapping alerted us to critical gaps, such as the question of business models and incentive mechanisms, all of which will need to be defined further. But we share this as a snapshot of our thinking at stage 1 of the development journey.

Read Day 2: User journey and scenario building

This blog was written by Eunsoo Lee in conversation with the core team of Permissioning the City and utilising the records of the workshop.

Team members who contributed to the workshop (in alphabetical order):
Calvin Po, Donghun Ohn, Eunji Kang, Eunsoo Lee, Fang-Jui ‘Fang-Raye’ Chang, Hyojeong Lee, Shuyang Lin, Theo Campbell

Wider advisory group:
Indy Johar, Hee Dae Kim, Gurden Batra, Charlie Fisher

Partners and funders:
NIPA (National IT Industry Promotion Agency), P4NE (Partners for a New Economy), Parti

#1 Transitioning from project to product — Day 1 was originally published in Permissioning the City Product Journey on Medium.


Innopay

Mounaim Cortet to share insights on FiDA at Mobey Forum’s Amsterdam member meeting

19 Nov 2024 – 20 Nov 2024, Amsterdam, the Netherlands | Trudy Zomer, 15 October 2024 - 08:59

Mounaim Cortet, Vice-President of INNOPAY, will be speaking at Mobey Forum’s Amsterdam Member Meeting, hosted by ING, on 19-20 November. The event will focus on key themes such as API monetisation, the EU’s Financial Data Access (FiDA) regulation, Embedded Finance and more.

Mounaim will share his insights on the strategic implications of FiDA, the challenges and considerations regarding FiDA schemes, and the strategic responses and opportunities for FIs. He will be joining an impressive lineup of speakers, including:

Katleen Van Gheel, Global Head of Innovation, ING
Hetal Popat, Director of Open Banking, HSBC
Joris Hensen, Founder and Co-Lead, Deutsche Bank API Programme
Vjekoslav Bonic, Head of Digital Channels & AI, Raiffeisen Bank International AG
Gijs ter Horst, COO, Ximedes
Patrick Langeveld, Open Banking Expert, ING

This event is open exclusively to Mobey Forum members, who include industry leaders, fintech professionals and Open Banking experts. If you’re a Mobey Forum member, don’t miss this opportunity to hear from the top voices in the industry. Register now in the Mobey Forum’s Online Member Community to secure your spot.


Ontology

Revolut’s Fraud Dilemma: Why Decentralized Identity Is the Real Answer


In a world that’s rapidly shifting to digital-first everything, banks like Revolut have redefined how we manage money. Instant transfers, real-time currency exchange, seamless app experiences — all with a sleek interface. But for Jack, a business owner who had £165,000 stolen in under an hour, this digital convenience has become a nightmare. And this story highlights one glaring question: Are centralized financial systems like Revolut’s really equipped to protect us in the digital age?

Jack’s story is unsettling. It started with a simple phone call from a scammer posing as Revolut. A few security codes later, his entire business account was drained. But this wasn’t just Jack’s mistake. Revolut’s systems failed him. They didn’t flag 137 payments to three new payees in an hour as suspicious, and by the time Jack reached out, he had lost £67,000 more due to the 23-minute delay in freezing his account. Revolut has refused to refund him, and they’re not alone — 10,000 fraud reports last year flagged Revolut as the culprit, more than any major high-street bank.

But what if this entire scenario could have been avoided — not with better fraud detection, but by rethinking how we handle identity verification and financial transactions altogether? Enter decentralized identity, the future of Web3 security, powered by solutions like ONT ID from Ontology.

The Case for Decentralized Identity

Revolut, like most traditional financial systems, uses centralized identity systems to verify who you are. This means your personal information — passwords, codes, biometric data — is stored and managed by a single company. If that system is compromised, as Jack’s was, you’re left exposed, and recovering your losses becomes a bureaucratic nightmare. That’s exactly what happened in Jack’s case. Fraudsters bypassed facial-recognition software and hijacked his account. The fact that Revolut didn’t even have a stored image of the fraudsters who authorized the theft shows the cracks in the system.

Decentralized identity flips this model on its head. With ONT ID, users don’t need to rely on a single institution to prove their identity. Instead, you are in control of your identity, managing it through a decentralized system that uses blockchain technology to verify your credentials securely. This self-sovereign identity model means your personal data is no longer centralized, reducing the risk of massive data breaches or fraud.

How ONT ID Could Have Prevented This

Imagine if Jack had been using a decentralized identity solution like ONT ID instead of Revolut’s traditional system. Here’s how it could have been different:

No Centralized Control: Jack’s identity wouldn’t have been stored on a vulnerable centralized server, reducing the risk of fraudsters gaining access through phishing attacks or bypassing ID verification software.
Zero-Knowledge Proofs: ONT ID can implement a zero-knowledge-proof system, which means Jack could have verified his identity without exposing any sensitive personal information. The scammers wouldn’t have had enough data to initiate the theft in the first place.
Real-Time Security Checks: ONT ID could have flagged any unusual activity in real time through its decentralized network, potentially freezing Jack’s account the moment fraud was detected — long before 137 payments were processed.

Decentralized Finance Meets Decentralized Identity

This isn’t just a problem for Revolut; it’s an issue for any centralized institution dealing with financial transactions. Fraudsters are always evolving, looking for ways to exploit these systems. Web3, with its emphasis on decentralization, offers a more secure future. With decentralized finance platforms on the rise, the integration of decentralized identity solutions like ONT ID becomes crucial.

Jack’s story is a cautionary tale of how fragile centralized financial systems can be, especially when they’re more focused on growth than security. But the solution is right in front of us: By embracing decentralized identity, we can build a future where individuals like Jack are in control of their own data and financial security. And that future isn’t some distant vision — it’s here, with tools like ONT ID leading the charge.

Ready to help build a more secure, decentralized future? With Ontology’s DID Fund, you can be part of the solution. We’re supporting innovators and developers who are driving the next generation of decentralized identity, privacy, and security.

Whether you’re passionate about blockchain, self-sovereign identity, or protecting users from fraud, the DID Fund can help you turn your ideas into reality. Apply today and join the movement to transform how we control and protect our personal data in Web3.

Start your journey at ont.id/did_fund.

Revolut’s Fraud Dilemma: Why Decentralized Identity Is the Real Answer was originally published in OntologyNetwork on Medium.


TBD

Our California DMV Hackathon Win: Privacy-Preserving Age Verification

Learn about our winning prototype for instant age verification within Square's Point of Sale system.

At the recent California DMV Hackathon, the Block team, represented by members from Square and TBD, won the Best Privacy & Security Design award for building a prototype of an instant age verification system. This solution utilizes mobile drivers’ licenses (mDLs) to provide secure, privacy-centric transactions for age-restricted purchases with Square’s Point of Sale (POS) system.

In this post, we’ll explore the core technical components behind our solution, which centered on using TruAge technology to enable seamless, secure age verification.

How TruAge QR Code Verification Works

At the heart of our prototype is the ability to scan and verify a TruAge Age Token QR code. These QR codes contain a verifiable credential (VC) that confirms a person’s legal age without exposing unnecessary personal information. Here’s a breakdown of how we approached verifying these credentials in our solution.

Decoding the QR Code Payload

The first step in the verification process was reading the QR code provided by the customer. TruAge QR codes follow a standard format that encodes the verifiable presentation (VP) in compact CBOR.

Our team implemented a scanner using our open source web5-swift SDK that reads the QR code and decodes the CBOR-encoded payload. This CBOR format is efficient, allowing the verifiable presentation to be transmitted and processed quickly, minimizing any delays at the point of sale.
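As a rough illustration of this step (rendered in TypeScript rather than the team's actual Swift code, and assuming the cbor-x npm package as the decoder), the decoding boils down to:

```typescript
// Minimal sketch: decode a scanned TruAge QR payload from CBOR.
// Assumes the scanner yields the raw bytes; uses the cbor-x npm package.
import { decode } from "cbor-x";

function decodeTruAgePayload(qrBytes: Uint8Array): unknown {
  // CBOR is a compact binary encoding, which keeps the payload small
  // enough to transmit and process quickly at the point of sale.
  return decode(qrBytes);
}
```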

Converting CBOR to JSON

Once we decoded the CBOR data, the next step was to parse it into a JSON-based verifiable presentation using the W3C Verifiable Credentials (VC) Data Model v1.1. This model is critical to ensuring interoperability across different platforms and services, as it standardizes how credentials are represented and exchanged in a decentralized manner.
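For orientation, this is the general shape a parsed v1.1 presentation takes. The field names follow the spec; the concrete TruAge profile may carry additional properties beyond this sketch.

```typescript
// Expected JSON shape after conversion, per the W3C VC Data Model v1.1.
interface VerifiableCredentialV11 {
  "@context": string[];
  type: string[]; // e.g. ["VerifiableCredential", ...]
  issuer: string | { id: string };
  issuanceDate: string; // ISO 8601 timestamp
  expirationDate?: string;
  credentialSubject: Record<string, unknown>;
}

interface VerifiablePresentationV11 {
  "@context": string[]; // must include "https://www.w3.org/2018/credentials/v1"
  type: string[]; // must include "VerifiablePresentation"
  verifiableCredential: VerifiableCredentialV11[];
}

function looksLikePresentation(value: any): value is VerifiablePresentationV11 {
  return Array.isArray(value?.type) && value.type.includes("VerifiablePresentation");
}
```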

Validating the Issuer’s DID

After converting the data into a verifiable format, we needed to validate the digital signature on the credential. We retrieved the issuer’s Decentralized Identifier (DID) from the TruAge server, which provided us access to a sandbox environment containing their list of authorized DIDs.

Using DIDs, we were able to validate the cryptographic signature to ensure that the credential was issued by a trusted TruAge provider. This validation step is critical for ensuring that the credential has not been tampered with and is issued by a legitimate authority.
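In outline, the check combines an allow-list lookup with signature verification against the issuer's DID document. The sketch below is illustrative; resolveDid and verifyJwtSignature are hypothetical helpers standing in for the SDK calls we used.

```typescript
// Hedged sketch of issuer validation. resolveDid and verifyJwtSignature
// are hypothetical stand-ins for the actual SDK functions.
declare function resolveDid(did: string): Promise<unknown>; // returns a DID document
declare function verifyJwtSignature(jwt: string, didDocument: unknown): boolean;

async function validateIssuer(vcJwt: string, authorizedDids: string[]): Promise<boolean> {
  // A JWT-encoded VC names its issuer DID in the payload's "iss" claim.
  const payload = JSON.parse(
    Buffer.from(vcJwt.split(".")[1], "base64url").toString("utf8")
  );

  // 1. The issuer must appear on the list of authorized TruAge DIDs.
  if (!authorizedDids.includes(payload.iss)) return false;

  // 2. The signature must verify against a key in the issuer's DID
  //    document, proving the credential has not been tampered with.
  const didDocument = await resolveDid(payload.iss);
  return verifyJwtSignature(vcJwt, didDocument);
}
```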

Credential Content Verification

Once the issuer’s signature was validated, the next step was to check the contents of the verifiable credential itself. In this case, we looked for proof that the individual was over 21 and verified that the credential had not expired.

This lightweight verification process ensures that businesses can quickly and easily confirm a customer’s legal age, while protecting their privacy by not exposing sensitive information like birthdates or addresses.
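Concretely, the checks reduce to two predicates, sketched below. The ageOver21 claim name is illustrative; the real TruAge schema defines its own field names.

```typescript
// Sketch of the content checks: the credential must not be expired and
// must assert the single data point we need. "ageOver21" is illustrative.
function checkCredentialContents(credential: {
  expirationDate?: string;
  credentialSubject: Record<string, unknown>;
}): boolean {
  // Reject expired credentials outright.
  if (credential.expirationDate && new Date(credential.expirationDate) <= new Date()) {
    return false;
  }
  // No birthdate or address is ever read, only the boolean proof of age.
  return credential.credentialSubject["ageOver21"] === true;
}
```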

Building the Integration: Web5 and TruAge Libraries

To bring this solution to life, we used a few key technologies:

iOS: Our team developed the iOS implementation using the web5-swift library, which allowed us to efficiently handle the scanning, decoding, and parsing of the TruAge QR codes on Apple devices. You can explore the code here: web5-swift TruAge Credentials.

Android: For Android, we modified the TruAge library provided by Digital Bazaar to make it compatible with our solution. This involved adapting the library for seamless integration with our QR code parsing and validation logic. The code for this can be found here: TruAge Java VC Verifier.

Privacy and Security at the Forefront

Our approach ensures that personal information is protected at every stage of the transaction. By focusing solely on verifying the specific data point needed (in this case, whether someone is over 21), we avoid collecting or storing any unnecessary information. This is a win for both businesses and consumers, as it minimizes risk while maintaining a smooth user experience.

By integrating this technology into Square’s Retail POS system, we not only enhanced security but also brought innovative, privacy-preserving solutions to small businesses that need to comply with age verification laws. This prototype has the potential to extend to many other use cases, from secure employee onboarding to identity verification for suppliers and customers.

What’s Next?

Participating in the California DMV Hackathon is just the beginning of our efforts to drive adoption of mobile drivers’ licenses and secure age verification solutions. Our work continues in collaboration with the California DMV and other industry partners as part of the NIST consortium, aimed at standardizing and scaling mDLs across the United States.

Join us on October 22 for a live Show & Tell of the prototype!

Monday, 14. October 2024

HYPR

Top 15 Cybersecurity Regulations for Financial Services in 2024


Financial services are one of the most targeted industries in the world for cyberattacks, suffering nearly 20% of all attacks in 2023. This is understandable considering the high-value outcomes of successful attacks and the fact that, despite supposed security improvements, attacks are still relatively successful, with 84% of finance organizations hit by a cyberattack going on to experience at least one breach.

Data breaches don't just affect the institution that's compromised but also affect confidence in the sector as a whole. The International Monetary Fund has highlighted the significant threat that weak financial services cybersecurity poses to the industry and the world. Potential outcomes range from a loss of confidence in financial services to widespread economic instability.

That's why global cybersecurity regulations have been ramped up over recent years, as they strengthen the security posture of individual firms and the industry overall. Here we'll look at the most important financial services cybersecurity regulations for 2024 and beyond.

New York — NYDFS Part 500

One of the US’s most important pieces of cybersecurity legislation is the New York Department of Financial Services cybersecurity regulation, technically known as 23 NYCRR Part 500. Enacted in 2017, it affects any firm that operates under the banking, insurance or financial services laws of New York, which covers most financial services firms in the US.

It requires firms to implement a cybersecurity policy over data governance, access controls and consumer privacy. It also obligates the introduction of more robust security methods, such as the deployment of multi-factor authentication for protecting non-public information, according to the NYDFS MFA requirements.

In November 2023, it added amendments, requiring firms to: 

implement access and privilege management
institute quarterly reporting to the board by the CISO
increase the scope of incident reporting to include cybersecurity events such as ransomware
administer annual risk assessments
conduct annual cybersecurity awareness training that focuses on ransomware and social engineering
conduct vulnerability management that includes annual penetration testing

In addition, the new amendment mandates that firms implement multi-factor authentication (MFA) for remote access and privileged accounts by November 2024. 

Upcoming Compliance Requirements 

By May 1, 2025, financial institutions must review access privileges for all users with access to sensitive information. This includes automated scans of information systems to identify vulnerabilities and manual review of systems that are not covered by automated scans. 

By November 1, 2025, organizations must develop and maintain a comprehensive asset inventory of their information systems that includes key information tracking (e.g., owner, location, etc.), policies for updating the asset inventory, and the procedure for disposing of information.

Pro tip: Consider implementing passwordless, phishing-resistant MFA, based on FIDO standards, to ensure that only cryptographically verified identities can access sensitive financial systems and prevent phishing attacks. These technologies can help companies improve compliance with stringent and evolving regulatory requirements such as NYDFS Part 500.
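For a sense of what this looks like in practice, here is a minimal browser-side sketch using the standard WebAuthn API. The server-side pieces (issuing the random challenge and verifying the returned assertion) are omitted.

```typescript
// Minimal browser-side sketch of FIDO2/WebAuthn authentication. The
// challenge must be random bytes issued by your server, which also
// verifies the assertion this call returns.
async function authenticateWithPasskey(challenge: Uint8Array): Promise<Credential | null> {
  // The browser binds the resulting signature to the site's origin, so a
  // look-alike phishing domain cannot replay it elsewhere.
  return navigator.credentials.get({
    publicKey: {
      challenge,
      userVerification: "required", // e.g. biometric or PIN on the authenticator
      timeout: 60_000,
    },
  });
}
```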

US — Gramm-Leach-Bliley Act (GLBA)

The GLBA has a specific Privacy of Consumer Financial Information Rule that directly affects financial services cybersecurity. This concerns non-public personal information (NPI) that a company will collect when informing about or providing a financial product or service. Fines for non-compliance can be up to $100,000 per violation and five years in prison for complicit directors.

US — Sarbanes-Oxley (SOX)

The original Sarbanes-Oxley Act was instrumental in codifying the disclosures companies must make to current or potential investors, as well as the penalties that are due for breaches (with executives being directly on the line for up to $1 million and ten years in prison). 

It has since been updated to include cybersecurity considerations. It now obligates all publicly traded companies in the US and their wholly-owned subsidiaries to declare adherence to cybersecurity best practices in areas such as authentication and data safety. They are also required to report any data breaches publicly.

Pro tip: Ensure secure employee identity proofing during onboarding by using a combination of background checks, strong authentication that includes secure cryptographic protocols and biometric validation to comply with Know Your Employee (KYE) regulations.

US — FFIEC Standards

The Federal Financial Institutions Examination Council (FFIEC) is an interagency body that sets standards for all federally supervised financial institutions, including their subsidiaries. The FFIEC cybersecurity best practices include guidance on effective authentication and access risk management practices. The FFIEC authentication standards emphasize multi-factor authentication (MFA) as a critical security control against financial loss and data compromise, similar to the PSD2 Strong Customer Authentication mandate.

It includes references to NIST standards SP 1800-17 and SP 800-63B, which provide implementation guidelines for passwordless MFA based on FIDO specifications. In August 2024, the FFIEC announced that it will sunset its Cybersecurity Assessment Tool on August 31, 2025, asking financial institutions to refer directly to relevant government resources instead, including the NIST Cybersecurity Framework 2.0 and the Cybersecurity and Infrastructure Security Agency’s (CISA) Cybersecurity Performance Goals.

US — FTC Safeguards Rule

The FTC Safeguards Rule requires non-banking financial institutions, such as mortgage brokers, auto dealers, and payday lenders, to implement a comprehensive security program to keep their customers’ information safe. The FTC Safeguards Rule had several new provisions that went into effect in 2023. Among the new statutes is a mandate for multi-factor authentication for anyone accessing customer information. It should be noted that this includes MFA for desktop and server access, not just applications.

US — NIST Cybersecurity Framework 2.0

The NIST Cybersecurity Framework (NIST CSF) was originally designed as a guide for businesses of all industries and sizes to manage cybersecurity risk. The newest version, the CSF 2.0, addresses the evolution of technology towards cloud migration and SaaS by adding a governance function and a set of searchable resources that security leaders can use to make the best decisions regarding their cybersecurity.

This framework is particularly relevant for financial organizations that rely heavily on SaaS and cloud solutions and hold vast amounts of sensitive data and information that they must protect from data breaches, cyberattacks and operational failures.

Pro tip: Implement continuous authentication to validate user identity in real-time, ensuring security throughout the entire session. This type of adaptive authentication defends against risks related to stolen credentials and unauthorized access. 

US — Executive Order on Critical Infrastructure Cybersecurity

Enacted in 2013, the Executive Order on Critical Infrastructure Cybersecurity 13636 requires federal agencies to work together with the private sector to strengthen security in critical sectors such as water, electricity and healthcare. During the global coronavirus pandemic, the financial services sector was officially classified as a critical sector, as it was considered essential to maintaining the nation’s economic stability.

Organizations are encouraged to use the NIST CSF framework to align their cybersecurity risk with a strategic plan of defense. This includes information sharing, developing incident response and recovery plans, and strengthening cybersecurity resilience through measures such as MFA and threat detection. 

The mandates for 2024 and 2025 include requiring each sector to have a specific cybersecurity plan tailored to its risk, and improved intelligence and threat sharing. In addition, it tasks different federal agencies with responsibility for different critical infrastructure (e.g. the Department of Energy is responsible for the security of the U.S.’s energy sector). It also requires the federal government to adopt minimum security requirements and a risk-based approach to critical infrastructure.

California — California Consumer Privacy Act (CCPA)

Introduced to help protect the privacy rights and consumer protections of Californians, the CCPA affects any company which does business with Californians and meets one of the following: 

Has a gross revenue of over $25 million
Buys, sells or receives personal data on 50,000 consumers
Makes over half its revenue from selling consumers’ personal information

The fines can be up to $2,500 for unintentional violations and $7,500 for intentional violations, which will be multiplied per record stolen in the case of a data breach.

EU — Payment Services Directive 2 (PSD2)

PSD2 was introduced to make it easier for financial services companies to integrate and securely share data while making payment systems safer. In addition, the law set specific technical standards for strong customer authentication and improved security measures.

The measures affect all companies catering to consumers in the EU and any payments that start, travel through or end in the EU. This puts clear obligations on financial services cybersecurity, even for firms outside the EU.

An updated version of the framework, PSD3, is currently in review. PSD3 will introduce significant changes for banks and non-bank payment service providers (PSPs), as well as consumers. The changes include new Strong Customer Authentication (SCA) regulations, with stricter rules around data access, payment protection, and authentication of users. The final version is expected to be published in late 2024 and become enforceable in 2026.

EU — NIS2 Directive

NIS2, or the Network and Information Security Directive 2, is an updated regulation from the European Union designed to strengthen cybersecurity across multiple industries. It will become law on October 17, 2024. NIS2 expands on the original NIS Directive by widening its scope and imposing stricter rules on security practices and incident reporting, with stiffer penalties for non-compliance.

Under NIS2, entities in sectors like energy, finance, transport, healthcare and manufacturing must implement strong cybersecurity protocols. These include effective risk management, strong authentication and access protocols, real-time threat monitoring, and rigorous incident reporting standards.

Importantly, the directive specifies the use of multi-factor authentication (MFA) and continuous authentication to protect network and information systems (Article 21 2(j)). NIS2 impacts not only major financial institutions, but also smaller financial entities, payment services, and digital wallets.

HYPR saves customers millions of dollars, with a 324% ROI. Read the Forrester report.

EU — Digital Operational Resilience Act (DORA) 

In response to increasing numbers of cybersecurity attacks and the operational disruption that followed the financial crisis of 2008, the Digital Operational Resilience Act (DORA) is targeted towards increasing the resilience of the financial sector for businesses in the European Union and those dealing with EU-based customers.

It includes authentication and access control requirements for Information and Communication Technology (ICT) systems, which the financial industry in particular is increasingly relying on for the outsourcing of services that deal with sensitive data. DORA is aimed at helping to defend against the unauthorized access of malicious actors to this sensitive data that could lead to data breaches, security incidents, and operational disruptions.

EU — General Data Protection Regulation (GDPR)

All companies processing the data of European Union citizens are affected by the GDPR. The law determines how data is used and protected and governs how consent must be obtained for collecting it. Along with data usage rules, timely reporting of breaches is also required when they affect EU citizens.

For financial services cybersecurity, adhering to GDPR is essential. Failure to do so can lead to fines of €20 million or 4% of global revenue, with Amazon receiving the biggest fine so far of $888 million.

UK — Data Protection Act

After the UK left the EU, it retained the GDPR’s requirements in domestic law through the Data Protection Act (2018). It is roughly the same as the EU GDPR (just amended for UK citizens) and carries the same requirements around data safety, consent and reporting, and fines for non-compliance.

Global - Payment Card Industry Data Security Standard (PCI DSS)

The PCI DSS covers the processors of payments from major credit and debit card companies. To achieve compliance, financial services cybersecurity programs must meet several obligations, such as protecting cardholder data, encrypting data in storage and transmission, and authenticating access to all system components. Breaches of the PCI DSS may result in fines and restrictions in using major credit cards.

The latest version, PCI DSS 4.0, introduces strong authentication requirements specifically related to passwords and MFA. Passwords now have stricter specifications (e.g., resetting them every 90 days), and MFA requirements have extended beyond administrators accessing the cardholder data environment (CDE) to all types of system components, including cloud, hosted systems, on-premises applications, network security devices, workstations, servers and endpoints.

Pro tip: Ensure compliance with standard 8.3.3 by using automated, high-assurance identity verification methods when resetting user credentials / authentication factors. This standard requires user identity verification before modifying authentication to prevent attacks that target this reset process.

Singapore — Monetary Authority of Singapore Notices on Cyber Hygiene

The Monetary Authority of Singapore (MAS) regulates financial institutions in the banking, capital markets, insurance and payments sectors. The MAS has issued a collection of notices on cyber hygiene, which are a set of legally binding requirements that financial institutions must take to mitigate the growing risk of cyberthreats.

The cyber hygiene notices cover six key areas, which include securing administrative account access, regular vulnerability patching and mitigation controls for systems that cannot be patched, written and regularly tested security standards, perimeter defense systems, malware protection and multi-factor authentication for any system used to access critical information.

Other — Various U.S. State Biometric Laws

Multiple U.S. states have biometric privacy laws — such as the Illinois Biometric Information Privacy Act (BIPA) — that affect any company doing business with a resident of that state. These laws regulate collection and storage of biometric information, such as face scans, fingerprints, or voiceprints. The statutes point out that biometric identifiers are different from other types of sensitive information as they are biologically unique to the individual, and cannot be changed once compromised.

Consequences of Non-Compliance with Financial Cybersecurity Regulations

When businesses fail to comply with these financial cybersecurity regulations, they are subject to monetary penalties, increased regulatory scrutiny, and a higher risk of cybersecurity incidents. For example, fines for NYDFS violations can reach $250,000 a day for ongoing non-compliance. These penalties and security incidents also damage customer trust and the value of the brand. In 2022, Uber’s stock went down by 5% after its third data breach in three months.

Along with operational disruption and a loss in revenue, cybersecurity incidents may result in legal action months or even years after the incident, as in the case of the class action suit brought by consumers against CDK following the MOVEit data breach.

Achieve Regulatory Compliance with Identity Assurance

The financial services sector is at high risk of cyberattacks due to the value of successful data breaches or account takeover attacks. To combat this, state, national and supranational governments and industry groups have introduced several financial services cybersecurity regulations to ensure best practice is deployed throughout the industry. 

A common thread throughout much of the financial services cybersecurity regulation worldwide is the protection of data and stronger identity security systems. Financial services organizations globally, including two of the top four banks, rely on HYPR to secure their systems and achieve regulatory compliance.

HYPR combines FIDO2 passwordless MFA, continuous adaptive risk response and automated identity verification to secure finance organizations while improving user experience. Learn more about HYPR’s security certifications and how our identity assurance platform helps you comply with financial cybersecurity regulations worldwide.

Key Takeaways:

Updates to Cybersecurity Regulations: Regulations are becoming more stringent across various frameworks, requiring frequent audits, vulnerability scans, and comprehensive asset inventories to improve cybersecurity and compliance.
A Global Focus on Financial Cybersecurity: Regulations like GDPR, PSD2, PCI DSS 4.0, and the new EU DORA focus on data protection, strong authentication and cyber resilience.
Consequences of Non-Compliance: Non-compliance with financial cybersecurity regulations can result in severe monetary penalties, reputational damage and legal action.

Datarella

Our Data Authenticity Chain


This is the third article in a series of technical posts about how Track & Trust works at a component level. The world today is full of fake news and dubious “facts.” Consequently, we face a significant challenge in verifying the accuracy of the data we receive. Moreover, a major part of this challenge is identifying the source of this data. We can’t predict who the end users of the Track & Trust system will be or exactly what they will want to communicate, which makes this task even more difficult. To address this issue, we must ensure that data entering our system are valid. This post explores how the “Trust” part of Track & Trust works. It explains exactly how we maintain the chain of data authenticity.

Quick navigation links to the follow-up articles will be provided at the bottom of each article once the series is complete. For now, let’s jump in.

Establishing a foundation for the data authenticity chain

We designed our system to accommodate key requirements that establish a foundation for data authenticity. Specifically, our goal was to create a flexible system that can work with any logistics company, regardless of their internal processes. This flexibility, a key benefit of Track & Trust, allows us to collaborate with a wide range of partners. Furthermore, logistics companies can increase the number of data points they receive about their shipments from the field by using Track & Trust.

This, in turn, enables them to achieve probabilistic 360° supply chain tracking. Our team structured the Track & Trust data to integrate easily into any logistics database. In particular, we use a series of linked cryptographic signatures and blockchain transactions to create this data authenticity chain. This chain of custody has a specific purpose: it ensures that we can authenticate and validate offline events once they reach our servers.
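To make the idea concrete, the sketch below shows a toy version of such a chain, in which each signed record embeds the hash of its predecessor so that tampering with any record invalidates every later link. It is a minimal illustration using Node's built-in crypto module; the production pipeline differs in detail.

```typescript
// Toy data authenticity chain: each entry is signed and carries the hash
// of the previous entry, so altering any record breaks all later links.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

interface ChainEntry {
  payload: string;   // a field event, e.g. a scan or GPS fix
  prevHash: string;  // SHA-256 of the previous entry ("" for the first)
  signature: string; // device signature over payload + prevHash
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function appendEntry(chain: ChainEntry[], payload: string): void {
  const prev = chain[chain.length - 1];
  const prevHash = prev
    ? createHash("sha256").update(JSON.stringify(prev)).digest("hex")
    : "";
  const signature = sign(null, Buffer.from(payload + prevHash), privateKey).toString("hex");
  chain.push({ payload, prevHash, signature });
}

function verifyEntry(entry: ChainEntry): boolean {
  return verify(
    null,
    Buffer.from(entry.payload + entry.prevHash),
    publicKey,
    Buffer.from(entry.signature, "hex")
  );
}
```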

How does the data authenticity chain work?

TL;DR: We leverage APIs to take inputs from our customers (logistics firms) as well as to give them valuable probabilistic 360° supply chain tracking data back. For demonstration purposes we have built a front-end website to make the system tangible, but the magic happens via our Swagger API.

The processes surrounding our data authenticity chain are pretty technical. To make them easier to understand, we’ve formatted the workflow into a sequence diagram that anyone can understand.

In summary, our data authenticity chain is simply a way of validating, recording and making messy data from the field trustworthy. Once that’s accomplished, we leverage our blockchain toolkit to make those data immutable and highly tamper-resistant. It’s a chain of custody for that data that includes built-in proof of origin. This, in turn, enables traceability and trust beyond the current state of the art.

Our next post will cover all of the ways that we can view this information. We’ll also be covering the orchestration systems operating in the background that enable us to do over-the-air updates to the hardware. There will be dashboards, monitoring and CI/CD galore for your perusal.


The post Our Data Authenticity Chain appeared first on DATARELLA.


KuppingerCole

Guardians Under Pressure: Mental Health in the World of Cybersecurity


by Warwick Ashford

In today’s hyper-connected world, cybersecurity professionals protect organizations from increasingly complex threats. While essential for safeguarding data and digital infrastructures, this work often takes a mental toll. Pressures arise from regulatory demands, business expectations, law enforcement interactions, cybercriminals, and IT complexity.

Regulatory Pressures and Compliance

Compliance with regulations like GDPR, HIPAA, and PCI DSS requires constant monitoring and attention to detail. The consequences of non-compliance heighten anxiety for professionals responsible for ensuring strict adherence.

Business Demands and Pace of Work

Cybersecurity teams face constant pressure as businesses drive digital transformation. Balancing business goals with preventing vulnerabilities leads to exhaustion. The demand to "do more with less" and justify security investments adds stress, especially when prevention's value is hard to quantify.

Law Enforcement and Criminal Activity

Collaborating with law enforcement and combating cybercriminals, including organized crime and state actors, brings additional stress. Investigating breaches and countering these threats can take a psychological toll.

Technological Complexity and Uncertainty

The fast-evolving tech landscape requires continuous learning. The unpredictability of threats and managing complex systems lead to burnout and self-doubt, increasing pressure to stay ahead of attackers.

Day-to-Day Cybersecurity Operations

Cybersecurity professionals also manage daily tasks like network monitoring and incident response. The constant vigilance and high task volume often lead to cognitive overload, disrupting work-life balance and causing fatigue.

A Call to Address Mental Health

The mental health challenges facing cybersecurity professionals are significant. Organizations must address these challenges and provide support. This important issue will be discussed at KuppingerCole’s Cyberevolution 2024 conference in Frankfurt, Germany, from 3–5 December.

Addressing mental health is key to fostering a resilient workforce. Recognizing this helps protect both digital infrastructures and the professionals who defend them. Providing realistic workloads, work-life balance, and destigmatizing mental health is essential for a sustainable workforce.

At cyberevolution 2024, speakers on this topic include Sarb Sembhi, CTO at Virtually Informed; Jasmine Eskenzi, Co-Founder & CEO of The Zensory; Inge van der Beijl, Director Innovation at Northwave Investigation and Innovation; and Hermann Huber, CISO at Hubert Burda Media.

They will be addressing topics such as ‘Cyber Mindfulness: Harnessing Mindfulness to Combat Social Engineering Attacks and Empower the Cyber Workforce of the Future’, ‘Cybersecurity and Mental Health: Navigating Crisis Impact’, and ‘Stress, Burnout and Declining Motivation in Cybersecurity!’. There will also be a panel discussion on addressing mental health challenges in cybersecurity.

Sunday, 13. October 2024

KuppingerCole

Going Beyond Identity: A Deep Dive into Zero Trust Security


Matthias and Alejandro discuss the concept of Zero Trust, emphasizing its importance in modern cybersecurity. They explore the core principles of Zero Trust, including continuous monitoring, data protection, and the common misconceptions surrounding it. The discussion highlights the significance of automation and orchestration in enhancing security measures and provides real-world examples of successful Zero Trust implementations. The conversation concludes with insights into future trends and the evolving nature of cybersecurity threats.



Friday, 11. October 2024

TBD on Dev.to

Known Customer Credential Hackathon


tbDEX is an open messaging protocol that enables liquidity seekers to connect with liquidity providers. This means that as a liquidity provider, your business can be the backend supplier in several payment applications.

Performing KYC on repeat customers every time they attempt to transact with you from a different payment app would be a pain. To avoid this, you will use the Web5 SDK to issue a Known Customer Credential (KCC) to a customer, Alice, who you have already completed KYC on. You will store the JWT representing the KCC in Alice’s Decentralized Web Node so that she can present it to your business from any payment app.

Challenge

Create a Decentralized Identifier (DID) and DWN to use as the Issuer. Bonus: Use the DIF community DWN instance hosted by Google Cloud.

Issue Alice a KCC that includes evidence. Note that for this challenge, you do not need to implement an actual identity verification flow.

Install the VC Protocol onto your DWN so that you can communicate with Alice’s DWN.

Obtain permission to write to Alice’s DWN by sending a GET request to:

https://vc-to-dwn.tbddev.org/authorize?issuerDid=${issuerDidUri}

Store the VC JWT of the KCC as a private record in Alice’s DWN.

Submit

To enter a submission for this hackathon, provide the DWN Record ID of the KCC.
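For reference, the authorization request from the earlier step can be sent with a few lines of fetch (a minimal sketch; replace the placeholder with the issuer DID you created):

```typescript
// Sketch of the authorization request from the challenge above.
const issuerDidUri = "did:dht:YOUR-ISSUER-DID"; // placeholder

const response = await fetch(
  `https://vc-to-dwn.tbddev.org/authorize?issuerDid=${issuerDidUri}`
);
console.log(response.status, await response.text());
```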

Resources

Alice’s DID: did:dht:rr1w5z9hdjtt76e6zmqmyyxc5cfnwjype6prz45m6z1qsbm8yjao
web5/credentials SDK
web5/api SDK
How to create a DID and DWN with Web5.connect()
Obtain Bearer DID - required to sign KCC
Known Customer Credential Schema
How to issue a VC with Web5
Example of issuing a KCC with Web5
Example of issued KCC
How to install a DWN Protocol
How to store a VC in a DWN

Contact Us

If you have any questions or need any help, please reach out to us in our #kcc-hackathon channel on Discord.


Spruce Systems

Fighting Election Deepfakes with Digital Identity

Discover how digital signatures can ensure the authenticity of online announcements, helping to restore trust in a world where misinformation thrives.

One of the biggest pieces of news of the 2024 U.S. Presidential election has been the July 21st announcement by President Joe Biden, made via a letter that many saw first on social media, that he was withdrawing from the race. The immediate reaction was skepticism and disbelief – an understandable reaction in an era when it seems like more and more of what we see on the internet is fake, false, or misleading.

The fallout of this skepticism was luckily limited. However, misinformation can have major impacts on people’s behavior, and the broader mistrust it sows can be deeply toxic for an entire society. Current attempts to deal with the problem, such as by fact-checking organizations, can’t keep up, especially as generative AI makes fakes much easier to produce.

It’s time for a different way to authenticate content online, and luckily, there’s one not too far over the horizon: digital signatures based on privacy-preserving cryptography can be used to prove the real source of online content. States, including California, are testing out a state-issued digital ID, known as the mobile driver’s license (mDL), based on these digital signatures. 

Particularly for important announcements from trusted sources, trustworthy digital signatures could have a huge positive impact on the information environment, and ultimately could help rebuild the trust that has been eroded by the online free-for-all of the past decade.

Let’s explore how that could work.

The Death of Drawn Signatures

President Biden’s withdrawal announcement was made, not in a network-televised speech, but via a letter on Biden’s letterhead. The letter was distributed to news outlets but also posted to social networks, including X (formerly known as Twitter), where many commentators saw it first. This cut out key sources of trust and vetting: the authenticity of a direct spoken statement and the third-party confirmation of a news organization.

It’s little surprise, then, that some speculated that Biden’s letter might not be real. After all, Twitter accounts can be hacked, and anyone might have created the letter. Notably, skeptics cast doubt specifically on Biden’s signature – the very tool humans have used to prove the authenticity of communications for centuries, even millennia.

Those doubts left a gap for a fake video of Biden purportedly making the announcement. That’s just one example of the fake videos, audio, and photos we’re likely to see in the coming weeks and months, as partisans engage in boundary-breaking informational warfare. 

Disinformation has always been one of the dark arts of politics, but new generative AI tools make such fakery so easy that fact-checkers can never hope to keep up. In fact, AI and automation are also empowering “bots” on social media and across the internet, which can simulate real humans’ reactions to content, misleading some victims even more severely with false “social proof.” In one worrying recent example, Russian operatives have used AI to impersonate Americans supposedly opposed to military support for Ukraine.

With the internet increasingly the center of political discussion in America and around the world, and with the most powerful politicians in the world making major announcements via social media, we need a better way to separate the fake from the real.

The Unfakeable Proof of Digital Signatures

To understand how content could be reliably associated with a real-world identity, we have to touch on a somewhat difficult topic: cryptography.

The problem with verifying content online up to now is that the infrastructure of the internet has no built-in identity system, and any digital file can be copied. That’s why digital information systems “break” traditional forms of attestation – anyone can post any file, from any location, and claim to be anyone. Not only can you copy-paste a written signature onto any document, you can now fairly effectively fake video of someone making a statement. While dedicated digital sleuths can spot impostors in various ways, it’s very difficult for amateurs.

Reliably “signing” a digital message instead relies on encryption techniques that aren’t exactly new but are still unfamiliar—digital signatures and public-key cryptography. 

In very broad terms, online public information could be reliably signed using a digital certificate issued and affirmed by a known source – possibly a driver’s license issuer, but not exclusively, as we’ll see. That certificate would then be mathematically mixed with the digitized message content, or “hashed,” to produce a string of characters that can only be matched back to that specific content-signature pair. 

That hash file would be attached to a public post, and anyone who wanted to affirm its authenticity could check that this specific content was signed by a specific person’s certification. To draw a rather abstract metaphor, it’s like signing a document with ink that contains all the letters in the document itself – a signature unique to one piece of data.

This leaves out a lot of technical detail, but what matters is that this system can’t be spoofed or broken, except by extraordinary measures, such as physically stealing certificate-signing hardware from the DMV. In the case of our election example, the President could use his mobile driver’s license or other verifiable digital ID to certify the content of his social media statement with a digital signature, and the public would be able to trust its authenticity.
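Stripped of the certificate machinery, the core primitive is hash-then-sign. Here is a minimal sketch using Node's crypto module, purely for illustration; real digital-ID systems such as mDLs layer certificates and selective disclosure on top of it.

```typescript
// Minimal hash-and-sign sketch of the primitive described above.
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const statement = "I am withdrawing from the race."; // the message content
const digest = createHash("sha256").update(statement).digest();

// The signature is unique to this exact content and this exact key:
// change one character of the statement and verification fails.
const signature = sign(null, digest, privateKey);

console.log(verify(null, digest, publicKey, signature)); // true
```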

This type of digital signature has another advantage – you don’t actually have to reveal your identity to sign content. Digital ID systems, such as mobile driver’s licenses, have what are known as ‘selective disclosure’ features, meaning you can attest only to the specific information you want. That can include simply affirming that “a human produced this content,” without disclosing your name. Or you can show that it was made by “a human from Dallas,” without disclosing your address. 

This is important to emphasize because the idea of a digital identity can initially sound oppressive or authoritarian – and it certainly can be, if implemented using authoritarian ideals. But under the right regulatory and technology framework, they can be far more privacy-preserving than current models.

Most importantly, and in sharp contrast with the most dystopian fears, you won’t even have to depend on a government agency to attest to your identity.

This is a widely-shared vision of the digital identity future, one that aligns with the values of privacy, individual freedom, and democratic choice. At the same time, it offers a vast improvement in online trust over the current status quo. 

Over the next few weeks, Americans and many others will see yet again just how flawed our online discourse is. Being able to prove who’s talking, whether President or pauper, is an obvious starting point for fixing it.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

Network Detection and Response (NDR)

by Osman Celik

This report provides an overview of the Network Detection & Response (NDR) market and a compass to help you find a solution that best meets your needs. It examines solutions that provide an integrated set of security and compliance capabilities designed to protect cloud-native applications across the development and production lifecycle. It provides an assessment of the capabilities of these solutions to meet the needs of all organizations to monitor, assess, and manage these risks.

Network Detection and Response (NDR)

by Osman Celik

Enhance network security with NDR solutions. Improve threat detection, compliance, and performance in complex infrastructures. Elevate your cybersecurity posture today by using this buyer's guide to help you find a solution that is right for you.

Cloud Backup for AI Enabled Cyber Resilience

by Mike Small

Discover how to achieve cyber resilience with robust data backup and recovery solutions, protecting against ransomware, IT failures, and regulatory challenges. Use our buyer's guide to help you find the solution that is right for you.

Okta

How to Create a Secure CI/CD Pipeline Using Okta Terraform

Embarking on a DevOps journey can be exciting and daunting, especially for beginners. The landscape is vast, and the learning curve can feel steep. One of the most common challenges is setting up and managing a robust Continuous Integration/Continuous Deployment (CI/CD) pipeline that ensures seamless integration and delivery of code changes. This guide aims to simplify that process by walking you through setting up a CI/CD pipeline for Okta using Terraform, AWS, and GitHub Actions.

Overcoming DevOps challenges securely

Getting started with DevOps often presents a series of challenges:

- Running Locally: Setting up Terraform locally involves dealing with packages, dependencies, and managing the state file, which can be cumbersome and error-prone.
- Collaboration: Ensuring team members can collaborate effectively requires a consistent and reproducible environment.

Making a setup production-ready introduces further complexities:

- State File Storage: Knowing where and how to store the Terraform state file securely.
- Secrets Management: Safely storing and managing sensitive information like API keys and passwords.
- Automation: Automating the deployment process to ensure reliability and efficiency.

In this post, we’ll use Okta, Terraform, AWS, GitHub, and GitHub Actions to create a secure CI/CD pipeline.

Table of Contents

- Overcoming DevOps challenges securely
- CI/CD pipeline architecture using Terraform, AWS, Okta, and GitHub
- CI/CD workflow overview
- Store Terraform files in source control
- Connect to Okta securely using OAuth 2.0
- Leveraging AWS for Terraform Backend and Secrets Management
- Store Terraform backend components in AWS
- Manage secrets securely
- Set up the IAM policy for the CI/CD pipeline
- Configure an OpenID Connect Provider in GitHub
- Create IAM roles for the CI/CD pipeline
- Use GitHub Actions to trigger Terraform commands
- Leverage GitHub Actions for the CI/CD workflow
- Organize the CI/CD and Terraform code files for maintainability
- Build the CI/CD pipeline using Terraform and Okta
- Set up source control branches for Terraform code files
- Finalize Terraform configuration
- Connect Terraform code to Okta resources
- GitHub Actions triggers Terraform dev build
- GitHub Actions triggers Terraform prod plan
- GitHub Actions triggers Terraform prod build
- Learn more about Okta, Terraform, CI/CD patterns, and OAuth 2.0

By the end of this post, you’ll have a solid understanding of how to set up a CI/CD pipeline tailored for Okta and the knowledge to start implementing infrastructure as code with Terraform.

Let’s dive in and take the first step towards mastering DevOps with a practical, hands-on approach!

Prerequisites

You’ll need the following tools installed on your local machine. Follow the installation instructions through the provided links.

IDE with a Terraform plugin, such as Visual Studio Code or IntelliJ IDEA

Choosing the proper Integrated Development Environment (IDE) with a Terraform plugin is crucial for an efficient and error-free workflow. Some essential features to look for in your IDE:

- Variable Declaration Warnings: If your Terraform module requires certain variables, the IDE will alert you when any required variables are not declared.
- Resource Declaration Assistance: When you declare a resource, the IDE will warn you if any required attributes are missing and suggest attributes to add.
- Resource and Attribute Autocompletion: The IDE will autocomplete resource names and attributes when referencing other resources, saving time and reducing errors.
Git

You’ll need the following accounts:

- Okta Workforce Identity Cloud Developer Edition account
- GitHub account and a GitHub organization account (You can create a free GitHub organization if you don’t have access to one)
- A free AWS account

CI/CD pipeline architecture using Terraform, AWS, Okta, and GitHub

It is essential to understand the key components and their roles in the CI/CD process. This integration of GitHub, Terraform, AWS, and Okta allows for secure and efficient infrastructure management and deployment. The following overview details each component and its function.

User

- Develop Code: Develops Terraform code on their local machine using a preferred IDE, then uses Git to push code to the GitHub repository.

GitHub Repository

- Code Storage: Stores the Terraform configuration code.
- Triggers Workflows: Repository events (e.g., pushes to branches, pull requests) trigger GitHub Actions workflows, which check out the code and automate builds using Terraform.

GitHub Actions

- Workflows: Workflows are automatically triggered by GitHub repository events and execute the necessary commands to integrate with AWS and Terraform.
- AWS:
  - Assume Role: Integrates with AWS IAM STS via the GitHub OIDC IdP to authenticate and assume roles with web identity.
  - Temporary Credentials: Utilizes temporary credentials returned from AWS IAM STS for Terraform backend operations.
- Terraform: Runs Terraform commands to manage infrastructure.

Terraform

- State Management:
  - S3: Utilizes S3 for storing Terraform state files.
  - DynamoDB: Uses DynamoDB for state locking to ensure consistency and prevent concurrent operations.
- Secrets Management: Retrieves the Okta OAuth 2.0 client credentials private key from AWS Secrets Manager for authentication and authorization to Okta management APIs.
- Okta:
  - Resource Management: Leverages Okta APIs via the Terraform Okta provider to manage resources.

CI/CD workflow overview

At a high level, this is what we aim to build out through this article. We’ll set up a CI/CD pipeline that automates infrastructure deployment using GitHub, Terraform, AWS, and Okta. Here’s a simplified overview of the workflow:

1. Branch Creation: Developers create and work on a develop branch.
2. Push to Develop: Code changes are committed locally and pushed to the remote develop branch.
3. Dev Build: GitHub Actions runs Terraform commands to deploy to the development environment. The push to develop automatically triggers this.
4. Pull Request to Main: A pull request is made from develop to main for code review. Any GitHub Actions workflow executions are included in the pull request for review.
5. Prod Plan: GitHub Actions previews changes for the production environment. This is triggered automatically by the pull request to main, and it lets pull request reviewers validate potential changes before modifying the production environment.
6. Merge to Main: The pull request is approved and merged into the main branch.
7. Prod Build: GitHub Actions runs Terraform commands to deploy to the production environment. The merge to main automatically triggers this.

Store Terraform files in source control

We’ll use GitHub as our code repository and GitHub Actions for our CI/CD workflows, so you’ll need a GitHub account. If you don’t have one, create one at GitHub.

You will also need a GitHub Organization. If you are an enterprise user, you likely already have one. If not, or if you’re experimenting, you can create one for free by following the GitHub Organizations instructions to start creating an Organization.

You’ll create a new repository within your GitHub Organization and then connect it to your local development environment:

1. Create a new repository: We created a templated repository for you to use for this guide. Follow the Creating a repository from a template instruction from GitHub and use this sample template. Select your GitHub Organization as the owner and name the repository using a structure such as {okta-domain-name}-okta-terraform (e.g., atko-okta-terraform). Ensure you set the repository to Private. This setting is crucial as the repository will run GitHub Actions workflows and have information related to your environment (e.g., AWS resource names).
2. Clone the repository: Once you create your repository, copy the clone link and run the following commands in the command line. Replace the variables with your GitHub username, GitHub organization, and repository name:

git clone https://{your_github_username}@github.com/{your-github-organization}/{your-repository-name}.git
cd {your-repository-name}

Connect to Okta securely using OAuth 2.0

We will use the OAuth 2.0 client credentials flow to access Okta APIs. OAuth 2.0 is the most secure method for integrating with Okta APIs, as we can tightly bind authorizations using scopes, and access tokens are short-lived compared to the long-lived SSWS API keys. Furthermore, Okta’s Terraform provider supports OAuth 2.0 Demonstrating Proof-of-Possession (DPoP), an additional security mechanism that cryptographically binds access tokens to a particular client, thereby reducing the risk of token replay by a malicious actor.

The Okta OAuth client requires ‘scopes’ to interact with the management API. For this guide, we will interact with the Groups resource in Terraform and corresponding APIs. To understand the corresponding scopes related to a Terraform resource and underlying Management APIs, refer to the Okta API documentation.

Finally, the OAuth client requires an Administrator Role to make administrative changes. We will assign the Organization Administrator role, as this contains sufficient permissions for the resources we manage within this build. If you intend to use Terraform to manage your environment on an ongoing basis, a Super Administrator role may be required (especially for managing resources like Admin Roles). The effective permissions are a combination of the scopes permitted for the client and the Administrator Role – so even though we give the client ‘Organization Administrator,’ if we only grant access to ‘groups’-related scopes, all the client can do via the API is manage groups!
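Looking ahead, here is a minimal sketch of how such a client is typically consumed by the Okta Terraform provider. The variable names match the tfvars files introduced later in this guide, and the Secrets Manager data source is an assumption that is set up in a later section:

provider "okta" {
  org_name       = var.okta_org_name       # tenant prefix, e.g. "atko"
  base_url       = var.okta_base_url       # e.g. "oktapreview.com"
  client_id      = var.okta_client_id
  scopes         = var.okta_scopes         # e.g. ["okta.groups.manage"]
  private_key_id = var.okta_private_key_id # the 'KID' of the public key

  # The private key is read from AWS Secrets Manager at run time
  # (see "Manage secrets securely" below); it never lives in the repo.
  private_key = data.aws_secretsmanager_secret_version.okta_private_key.secret_string
}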

Follow these steps to set up an API Services application in Okta. Navigate to the Okta Admin Console and follow the steps to create the API services application:

1. Navigate to Applications > Applications and press the button to Create App Integration
2. Select API Services and press Next
3. Name your application (e.g., Terraform)
4. Press Save

In the General Settings tab, find the Client Credentials section and press Edit to make the following changes:

1. Change the Client authentication method to Public key / Private key.
2. In the Public Keys section, click Add key and then Generate new key.
3. Select the PEM tab and copy the contents to a file you’ll use later.
4. Select Done and Save

Navigate to the Okta API Scopes tab and make the following changes:

Find okta.groups.manage and select Grant

Navigate to the Admin roles tab and press Edit assignments. Then apply the following changes:

1. In the Role drop-down, select ‘Organization Administrator’, or your preferred Admin Role
2. Select Save Changes to finish assigning the role

Repeat these steps to create an API Service Okta application and configure it for any additional environments you manage.

⚠️ Important

Do not save the private key locally. In the next steps, we will securely onboard it to secrets management.

Leveraging AWS for Terraform Backend and Secrets Management

We will utilize AWS for both the Terraform backend and Secrets Management. The Terraform backend will store state files, which track the status of your Okta environment based on previous builds. We will use the GitHub OIDC integration with AWS for Terraform authentication. This allows GitHub to authenticate with AWS using OpenID Connect (OIDC) and assume the necessary role via web identity to interact with required services. This approach eliminates the need for long-lived or persistent secrets (such as AWS access keys and secrets), ensuring a more secure setup.

Store Terraform backend components in AWS

First, let’s create the necessary components for the Terraform backend.

Create an S3 Bucket

1. Follow the Creating a bucket instructions from AWS to create a bucket. Name the bucket using a structure such as {okta-domain-name}-okta-terraform-state.
2. By default, Block all public access is enabled, which ensures that your bucket contents are private – an integral control given that the bucket will contain information about your Okta configuration.
3. I highly recommend enabling Bucket Versioning to version your state files. This is a valuable feature should you need to roll back to previous versions of the state.
4. After you have created the bucket, follow the Viewing the properties for an S3 bucket instructions to navigate to the properties of the bucket and capture the ARN. The ARN will be used later to define the AWS IAM Role Policy.
5. Lastly, we will use folders to organize your different environments’ state files. Follow the Organizing objects in the Amazon S3 console by using folders instructions to create a folder for each environment you manage (e.g., dev and prod).

Create a DynamoDB Table for State Locking

1. Follow the Create a table in DynamoDB instructions to create a DynamoDB table. Name the table using a structure such as {okta-domain-name}-okta-terraform-{environment} (e.g., atko-okta-terraform-dev).
2. Set the partition key to ‘LockID’ and leave the other configuration defaults.
3. Note the table name; we will be using it later in the AWS IAM Role Policy definition.
4. Repeat for any other environments you manage.
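If you prefer to codify these backend components as well, a minimal Terraform sketch might look like the following. The resource names and values are illustrative examples from this guide, and in practice these resources are usually created once, outside the pipeline, to avoid a chicken-and-egg problem with the backend itself:

# Example bootstrap for the Terraform backend components (created once,
# outside the CI/CD pipeline). Names are illustrative.
resource "aws_s3_bucket" "tf_state" {
  bucket = "atko-okta-terraform-state"
}

resource "aws_s3_bucket_versioning" "tf_state" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "atko-okta-terraform-dev" # one lock table per environment
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}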

For more information on the AWS S3 Terraform backend, please refer to Terraform S3 Backend Documentation.
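For reference, the backend block in the Terraform configuration itself is deliberately left partial; the environment-specific values arrive from the backend-*.conf files at init time. A sketch:

terraform {
  backend "s3" {
    # bucket, key, region, and dynamodb_table are intentionally omitted here.
    # They are supplied per environment at init time, e.g.:
    #   terraform init -backend-config=backend-dev.conf
  }
}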

Manage secrets securely

Next, we will set up AWS Secrets Manager to securely store the private key for authentication and authorization to Okta management APIs.

1. Follow the Create an AWS Secrets Manager secret instructions to store the OAuth 2.0 private key(s). When configuring the secret, note this is of the secret type Other type of secret, and Plaintext.
2. Ensure you name the secret something meaningful, as this will be referenced in your Terraform configurations as well as the AWS IAM Role Policy definition – follow a structure such as {environment}/okta-terraform-key (e.g., dev/okta-terraform-key).
3. Since it’s a private key, keep any rotation-related configurations as default options.
4. Once the secret has been created, copy the ARN for later use within the AWS IAM Role Policy definition.
5. Repeat for any additional environments you manage.
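In the Terraform code, the private key can then be read at plan/apply time rather than checked into the repository. A minimal sketch, assuming the variable name used later in this guide:

# Look up the Okta OAuth 2.0 private key stored in the step above.
data "aws_secretsmanager_secret_version" "okta_private_key" {
  secret_id = var.okta_secret_id # e.g., "dev/okta-terraform-key"
}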

Set up the IAM policy for the CI/CD pipeline

Next, we’ll create the IAM Policy definition. This policy will be used by the role that GitHub will assume via OpenID Connect (OIDC).

First, we will prepare the IAM policy JSON file. Use the following template and make the necessary replacements using the ARNs you’ve captured from the previous steps.

- Replace <S3-ARN> with the ARN of your S3 bucket. This grants permission to list the bucket. You can find it under the Properties tab of the S3 bucket. Example: arn:aws:s3:::akto-okta-terraform
- Replace <S3-ARN>/* with the ARN of your S3 bucket and any folder structures for respective environments. This grants permission to get and update objects in the relevant path. Alternatively, you can use a wildcard (*) for the entire bucket. Example: arn:aws:s3:::akto-okta-terraform/dev/*
- Replace <AWS-Region>, <Account-Number>, and <DynamoDB-Table-Name> with the AWS Region, AWS Account Number (found in the management console toolbar), and DynamoDB Table Name respectively. This grants permission to add and remove rows in the table for the Terraform state file locking process. Include any additional tables for each environment. Example: arn:aws:dynamodb:ap-southeast-2:99123456789:table/akto-okta-terraform-dev
- Replace <SecretsManager-ARN> with the ARN of your Secrets Manager secret. This grants permission to retrieve the secret value. Include any additional ARNs for each environment. Example: arn:aws:secretsmanager:ap-southeast-2:99123456789:secret:dev/akto_okta_terraform_key-QuqiGR

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "<S3-ARN>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "<S3-ARN>/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:<AWS-Region>:<Account-Number>:table/<DynamoDB-Table-Name>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:ListSecrets",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": [
        "<SecretsManager-ARN>"
      ]
    }
  ]
}

Follow the Create IAM policies documentation for instructions on creating an IAM Policy. When creating the policy document, use the JSON editor and input the JSON from the previous step. Name the policy something meaningful (e.g. ‘Okta_Terraform_Backend’).

By following these steps, you will have created an IAM policy that provides the necessary permissions for Terraform to interact securely with AWS services.

Configure an OpenID Connect Provider in GitHub

Next, we’ll configure the OIDC Identity Provider for GitHub. Follow the AWS instructions at Create an OpenID Connect identity provider in IAM.

- For the Provider URL, use https://token.actions.githubusercontent.com
- For the Audience, use sts.amazonaws.com

For more information on integrating GitHub with AWS using OIDC, refer to the GitHub and AWS integration documentation.
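If you manage IAM with Terraform rather than the console, the rough equivalent is the sketch below; note that the thumbprint handling varies by AWS provider version, since AWS now maintains the trust for this provider itself:

resource "aws_iam_openid_connect_provider" "github" {
  url            = "https://token.actions.githubusercontent.com"
  client_id_list = ["sts.amazonaws.com"]

  # Older AWS provider versions require the issuer's certificate thumbprint;
  # newer ones can effectively ignore it for this provider.
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}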

Create IAM roles for the CI/CD pipeline

Finally, we’ll create an IAM Role for the GitHub OIDC Identity Provider to assume. This role will link the OIDC Identity Provider via the trusted entity and the policy via permissions.

Follow the instructions for Creating a role for OIDC from AWS. When configuring the Trusted Entity, choose Web Identity, and use the following values for the configurations:

- Identity provider: token.actions.githubusercontent.com
- Audience: sts.amazonaws.com
- GitHub organization: {your_github_organization} (the unique identifier for your GitHub Organization)
- GitHub repository: {your_github_repository} (the name of your GitHub repository)

For permissions, choose the IAM Policy you created earlier (‘Okta_Terraform_Backend’, or the name you chose). Name the role something meaningful (e.g., ‘GitHub_Okta_Terraform_Backend’). Once the role has been created, copy the Role ARN. This is the only variable we need to pass to our pipeline to initialize the backend and retrieve the secret to authenticate and authorize Okta APIs — and it’s not even a secret!

By following these steps, you will have created an IAM Role that GitHub can assume via OIDC, enabling secure interactions with AWS and Okta.
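As a rough Terraform equivalent of that console flow, the trust policy looks something like the sketch below. The sub condition restricting which repository may assume the role is an assumption you should tighten to match your organization, repository, and branch strategy, and the policy attachment assumes the IAM policy above is also managed in Terraform:

resource "aws_iam_role" "github_okta_terraform" {
  name = "GitHub_Okta_Terraform_Backend"

  # Trust policy: only GitHub Actions workflows from your repository
  # may assume this role via OIDC web identity federation.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRoleWithWebIdentity"
      Principal = {
        Federated = aws_iam_openid_connect_provider.github.arn
      }
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
        }
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:{your_github_organization}/{your_github_repository}:*"
        }
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "backend" {
  role       = aws_iam_role.github_okta_terraform.name
  policy_arn = aws_iam_policy.okta_terraform_backend.arn # hypothetical resource name
}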

Use GitHub Actions to trigger Terraform commands

GitHub Actions allows us to run our build and deployment activities using Terraform commands executed in a temporary virtual machine.

First, we must store the Role ARN and other environment variables in GitHub. To create and store variables for the GitHub repository, follow the Creating configuration variables for a repository instructions.

- Store the Role ARN: Create a variable named AWS_ROLE_ARN and use the Role ARN for the value (e.g., arn:aws:iam::<Account-Number>:role/<Role-Name>).
- Store the Region: Create a variable named AWS_REGION and use the Region in which the AWS resources were created (e.g., ap-southeast-2). Refer to the following documentation for more details on Region names: AWS Regions Documentation

Ensure you do this at a ‘Repository’ level and not at an ‘Organization’ level, or the GitHub Actions workflows will not be able to read the variables.
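If your GitHub organization is itself managed with Terraform, these repository variables can also be codified. A sketch using the community integrations/github provider; the repository name and resource addresses referenced here are assumptions:

# Requires the integrations/github provider configured with repository
# administration rights.
resource "github_actions_variable" "aws_role_arn" {
  repository    = "atko-okta-terraform" # example repository name
  variable_name = "AWS_ROLE_ARN"
  value         = aws_iam_role.github_okta_terraform.arn
}

resource "github_actions_variable" "aws_region" {
  repository    = "atko-okta-terraform"
  variable_name = "AWS_REGION"
  value         = "ap-southeast-2"
}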

Leverage GitHub Actions for the CI/CD workflow

We will use multiple pre-built GitHub Actions to authenticate to AWS and run our Terraform commands. No action is required from you to configure these workflows. At a high level, the configured GitHub Actions workflows will perform the following:

- GitHub Actions Runner: This action checks out your repository onto the runner, allowing you to run Terraform commands against your code.
- AWS Configure AWS Credentials: This action establishes an AWS session using the GitHub OIDC Identity Provider (IdP) and the Assume Role with Web Identity capability. There is no need to manage any secrets or custom scripts, as this action will handle session establishment.
- Terraform CLI: This action runs the Terraform commands.

For more information and to examine the code, see the github/workflows folder within the repository.

Organize the CI/CD and Terraform code files for maintainability

The high-level structure of the repository looks like this:

github/
├─ workflows/
│  ├─ push-main.yml
│  ├─ push-develop.yml
│  ├─ pr-main.yml
terraform/
├─ modules/
│  ├─ {module}/
│  │  ├─ {resource}.tf
│  │  ├─ variables.tf
├─ main.tf
├─ variables.tf
├─ backend-dev.conf
├─ backend-prod.conf
├─ vars-dev.tfvars
├─ vars-prod.tfvars

Review the GitHub Workflows directory

- github/workflows/: This directory contains the GitHub Actions workflow files that define the CI/CD pipeline.
  - push-main.yml: Workflow triggered by a push to the main branch.
  - push-develop.yml: Workflow triggered by a push to the develop branch.
  - pr-main.yml: Workflow triggered by a pull request to the main branch.

Review the Terraform configuration files

- terraform/: The root directory for all Terraform configuration files.
- modules/: This directory contains reusable Terraform modules.
  - {module}/: Each module has its own directory.
    - {resource}.tf: The Terraform configuration file for specific resources within the module.
    - variables.tf: The child module input variables definition file.
- main.tf: The main Terraform configuration file where all providers, modules, and variables are configured.
- variables.tf: The parent module input variables definition file.
- backend-dev.conf: Configuration for the backend components for the development environment. This configuration must be passed in via the CLI since named variables cannot be used directly in the backend block.
- backend-prod.conf: The configuration for the backend components in the production environment, similar to the development configuration.
- vars-dev.tfvars: Input variable values specific to the development environment.
- vars-prod.tfvars: Input variable values specific to the production environment.

Build the CI/CD pipeline using Terraform and Okta

Now that we have everything set up, let’s actually build something!

First, we will need to update a few files with some of the necessary configurations relevant to your environment. Then we will create a new group in your Okta environment, using variables to declare the group name.

Set up source control branches for Terraform code files

Ensure your local repository is up-to-date with the remote main branch.

git checkout main
git pull origin main

Create and switch to the branch named develop.

git checkout -b develop

Finalize Terraform configuration

Now that we have checked out our code, let’s finalize the configurations required for Terraform to interact with our backend, retrieve the necessary secrets, and interact with the Okta Management APIs. Open the repository in your preferred IDE to edit some files.

Backend configuration files

The Terraform backend configuration is stored within the backend-*.conf files, which contain configurations relevant to your environments. Within these files, you will find placeholders for the following:

- bucket – the name of your bucket (not the ARN!)
- key – the path to your Terraform state file (i.e., the folder and resultant file name, which defaults to terraform.tfstate)
- dynamodb_table – the name of your DynamoDB table (not the ARN!)
- region – the AWS Region

Replace all the placeholders in the backend-*.conf files. There are two files to update, one each for the development and production environments. Refer to the following example as a reference:

bucket = "atko-okta-terraform"
key = "dev/terraform.tfstate"
dynamodb_table = "atko-okta-terraform-dev"
region = "ap-southeast-2"

Terraform variables (tfvars)

Variables are a critical component of infrastructure-as-code configurations: they allow you to maintain a single set of configurations while keeping environment-specific values separate. Within Terraform, one way to manage such environment-specific values is using ‘tfvars’ files. A ‘tfvars’ file contains a set of variable values specific to an environment. It is passed in via the Terraform CLI in our GitHub Actions workflow when running specific parts of the workflow.

Additional configuration-related variables stored within the vars-*.tfvars files require updates. Within these files, you’ll find placeholders for the following:

- region – the AWS Region
- okta_org_name – the prefix value for your Okta tenant
- okta_base_url – the base or suffix value for your Okta tenant
- okta_scopes – the scopes for the Terraform Okta OAuth 2.0 client application
- okta_client_id – the client ID for the Terraform Okta OAuth 2.0 client application
- okta_private_key_id – the private key ID for the Terraform Okta OAuth 2.0 client application. This is the ‘KID’ value, which can be obtained in the ‘Public Keys’ section of the OAuth 2.0 application configuration
- okta_secret_id – the AWS Secrets Manager ‘secret name’ for the private key of the Terraform Okta OAuth 2.0 client application. This is the ‘Secret name’ value, not the ‘Secret ARN’.
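To see how these keys line up with the parent module, here is a sketch of the matching variable declarations in terraform/variables.tf; the types are assumptions based on the example values shown below:

# Parent module variable declarations corresponding to the tfvars keys above.
variable "region" { type = string }
variable "okta_org_name" { type = string }
variable "okta_base_url" { type = string }
variable "okta_scopes" { type = list(string) }
variable "okta_client_id" { type = string }
variable "okta_private_key_id" { type = string }
variable "okta_secret_id" { type = string }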

Replace all the placeholders in the vars-*.tfvars files. Refer to the following example as a reference:

region = "ap-southeast-2"
okta_org_name = "atko"
okta_base_url = "oktapreview.com"
okta_scopes = [ "okta.groups.manage" ]
okta_client_id = "0oaes123y1FekjfoE1d7"
okta_private_key_id = "ievOgRgNc...aJJn5ra_4"
okta_secret_id = "dev/okta_terraform_key"

Connect Terraform code to Okta resources

The repository includes a directory module containing a resource file okta_groups.tf, which we will use to provision a group in your Okta tenant. In doing so, we’ll also work through a core tenet of the previously mentioned variables, defining both input and output variables. This may be a little confusing initially, so take some time to understand how the different files and modules interact! (A diagram in the original post contextualizes the various files we are going to step through.)

Open terraform/modules/directory/variables.tf and uncomment the following entry. This is the variables file for the directory module and it defines which input variables are required. Each module you develop will have its own variables file.

variable "okta_group_name" { type = string }

Open terraform/modules/directory/okta_groups.tf and uncomment the following entry. This is a resource block. The resource block has two parts: the resource type, okta_group, and the resource name, okta_test_group. Feel free to change the resource block name (okta_test_group) to something you choose. Within the resource block body are the configuration arguments for the resource. We have one argument defined, the name, referencing the input variable okta_group_name:

resource "okta_group" "okta_test_group" { name = var.okta_group_name }
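The guide mentions defining output variables as well as inputs, though none is shown at this step. As a hypothetical example, the module could expose the new group’s ID for other modules to reference, via a file such as terraform/modules/directory/outputs.tf:

# Hypothetical output exposing the created group's ID to the parent module.
output "okta_test_group_id" {
  value = okta_group.okta_test_group.id
}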

Open terraform/variables.tf and uncomment the following entry. This is the variables file for the parent or main module. The variables within this file are assigned via the tfvars files, which are passed in with environment-specific configurations via the Terraform CLI:

variable "okta_group_name" { type = string }

Next, open terraform/main.tf and uncomment the following entry. The main file contains critical configurations for the backend and providers (like Okta or AWS). It is also where we reference any modules, including the directory module, via their path within the local repository. It’s also necessary to pass through any variables within this module block (a sketch of the resulting module block follows the list below). You can manage variables in two ways:

1. Configure the variable values directly within the main file, which may be acceptable for any standardized or non-environment-specific variables.
2. Reference the parent module variables file like we have done, so in this example: okta_group_name = var.okta_group_name
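Here is the sketch promised above of how the resulting module block in terraform/main.tf might look, assuming the module path described earlier:

# Reference the directory module by its path in the local repository and
# pass the parent module variable through to the child module.
module "directory" {
  source          = "./modules/directory"
  okta_group_name = var.okta_group_name
}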

Open terraform/vars-dev.tfvars and terraform/vars-prod.tfvars and uncomment the following entry. This sets the value of the okta_group_name variable for each respective environment. Feel free to change it and make the values environment-specific.

okta_group_name = "Okta Test Group GitHub Actions"

Now, we can stage our changes. Use git add to add the changes for the next commit.

git add .

Lastly, commit the changes:

git commit -m "Initial commit"

With the changes committed, we can now push your changes to the remote develop branch.

git push origin develop

GitHub Actions triggers Terraform dev build

GitHub Actions is configured to trigger a build when changes are pushed to the develop branch. The workflow defined in the repository will:

- Authenticate with AWS: Use GitHub OIDC to assume the necessary role.
- Run Terraform Commands: Execute terraform init, terraform plan, and terraform apply to deploy changes to the development environment.

Monitor the action in GitHub to ensure the build completes successfully, and check your Okta environment to observe the creation of the group using the name specified in the tfvars file.

If GitHub Actions has any errors, refer to the error message within the GitHub Actions workflow for further details.

If you missed any configurations within the repository files (e.g., backend-*.conf or vars-*.tfvars), make the changes locally and perform the git add, git commit, and git push commands again.

If you missed any configurations within Okta (e.g., OAuth 2.0 scopes) or AWS (e.g., IAM Role permissions, etc.), then correct the issue and re-run the GitHub Actions workflow from the GitHub Actions console on a failed workflow.

Create a pull request to merge code from the develop branch to the main branch:

1. Navigate to the repository on GitHub.
2. Open a pull request from develop to main.
3. Provide a detailed description of the changes and any context or considerations for the reviewers.

GitHub Actions triggers Terraform prod plan

When a pull request is opened, GitHub Actions triggers a Terraform plan for the production environment. This plan will:

- Authenticate with AWS: Use GitHub OIDC to assume the necessary role.
- Run Terraform Plan: Execute terraform init and terraform plan to show the potential changes without applying them against the production environment.

Reviewers can inspect the plan output to understand the impact of the changes before merging.

After reviewing and approving the pull request, merge it into the main branch. You can merge using the GitHub Pull Request user interface.

GitHub Actions triggers Terraform prod build

Merging to the main branch triggers a new GitHub Actions workflow. The workflow will:

- Authenticate with AWS: Use GitHub OIDC to assume the necessary role.
- Run Terraform Commands: Execute terraform init, terraform plan, and terraform apply to deploy changes to the production environment.

Monitor the Actions tab to ensure the deployment completes successfully.

Learn more about Okta, Terraform, CI/CD patterns, and OAuth 2.0

In this article, we have outlined the architecture and steps needed to set up a secure and efficient CI/CD pipeline using GitHub Actions, Terraform, AWS, and Okta. By leveraging these technologies, we can automate infrastructure management, ensuring consistency and reducing the risk of manual errors. We covered the integration of GitHub with AWS for secure authentication and authorization, the configuration of Terraform for state management and secrets handling, and the overall workflow for deploying changes from development to production. If you found this post interesting, you may like these resources:

- How to Secure Your Kubernetes Clusters With Best Practices
- How Can DevOps Engineers Use Okta?
- Store ASP.NET Secrets Securely with Azure KeyVault
- How to Deploy a .NET Container with AWS ECS Fargate

Stay tuned for subsequent articles on Okta-recommended policies to help get you started with secure-by-design configurations from day one!

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. Leave us a comment below if you have any questions or requests for topics!


PingTalk

Achieving Zero-Impact Retail Identity Migration

Achieve a zero-impact retail identity migration. Learn how to seamlessly migrate your customer base to a new platform, enhance security, and boost customer satisfaction.

There’s little doubt that the COVID-19 pandemic and its aftereffects have had a dramatic impact on the retail sector, with many retailers seeing a marked uptick in traffic through their digital channels as consumers embrace the freedom of shopping from anywhere at any time. The eCommerce share of total sales continues to rise by roughly 7.5% annually, and hybrid shopping has taken off, with consumers shopping at least partially online 55% of the time.1 This increase in digital interaction tends to go hand in hand with a requirement for a modern, best-of-breed Customer Identity and Access Management (CIAM) platform in order to truly capitalize on the upsell and cross-sell opportunities that come from increased consumer stickiness and personalization.

While many retailers agree that legacy identity systems – often tightly coupled with existing CRM or ecommerce platforms – reduce agility and negatively impact user experience, it can seem daunting to embark on a migration project, particularly when the impacts of migrating a large existing customer base to that new platform are considered. Nevertheless, migration is often a necessary first step in fulfilling other key digital transformation initiatives.

Read on to see how retailers can alleviate these concerns and tackle the migration process with confidence – without leaving a single customer behind.

Thursday, 10. October 2024

1Kosmos BlockID

Unmasking the MGM Resorts Cyber Attack: Why Identity-Based Authentication is the Future

What if the key to your hotel room was suddenly useless, and your personal information was floating in the digital ether? This happened after the recent MGM Resorts cyberattack; the vulnerabilities in our current cybersecurity measures have been laid bare. The incident disrupted the company’s services and jeopardized sensitive customer data, raising serious questions about the efficacy of traditional security protocols.

We will dive into the MGM Resorts cyberattack to uncover the shortcomings in current cybersecurity measures. You’ll also understand why traditional security measures are failing us and how adopting identity-based authentication can safeguard your organization against increasingly sophisticated cyber threats.

Unpacking the Security Incident

The MGM Resorts cyberattack represents a significant breach in cybersecurity measures, disrupting the company’s services and jeopardizing customer data. The event exposes critical vulnerabilities in organizations that may appear otherwise secure and highlights the urgent need for enhanced cybersecurity protocols.

What Happened?

On September 11, 2023, MGM Resorts announced they were experiencing a “cybersecurity issue,” which turned out to be more devastating than initially perceived. The company had to shut down various services, ranging from digital room keys to slot machines, resulting in many inconveniences for its guests.

Timeline of Events

The company detected the breach and shut down the affected systems to mitigate further damage. Over the next few days, guests and employees faced many problems. Websites for MGM’s numerous properties were offline, and the organization had to resort to manual operations, such as handwritten receipts for casino winnings and long lines for room check-ins.

Data Types Affected

The breach compromised a wide variety of data types, potentially including customer personal information and financial details. While the specifics are not yet fully disclosed, the scope of affected data types suggests that the breach could have far-reaching implications for the resort’s guests.

Affected Stakeholders

The cyberattack has consequences that extend beyond MGM Resorts. Guests, employees, and possibly even shareholders are affected by the breach. Customers are particularly concerned about the safety of their personal and financial information, and employees face the challenge of maintaining operations under compromised conditions.

Why Did It Happen?

The cyberattack was orchestrated using social engineering tactics, specifically vishing, which involves manipulating individuals into divulging confidential information over the phone. These tactics exploited human vulnerabilities rather than technological ones, making the attack highly effective.

Vulnerabilities Exploited

The primary vulnerability lies in MGM’s human capital. The attackers used publicly available information and a convincing phone manner to gain unauthorized access to MGM’s systems. This form of social engineering underscores the need for better staff training and awareness to prevent future attacks. But it brings to light a deeper issue. Security teams need a way to take end users out of the critical path of cyberattacks and maintain continuous control of network security. But how?

Shortcomings in Current Authentication Systems

MGM’s existing authentication protocols were inadequate in preventing a vishing-based attack. The attackers impersonated an MGM employee by calling the IT service desk to obtain the necessary credentials. This exposes critical flaws in the company’s verification systems, raising questions about the efficacy of its cybersecurity measures.

The Inadequacies of Half-Measures in Authentication

As seen in the MGM cyber attack, relying on single-factor authentication is a glaring example of outdated security. This method falls short today, when cyber threats are increasingly sophisticated.

Although a step in the right direction, multi-factor authentication can fall short if not implemented correctly. For instance, an easily intercepted second factor, like a code sent by text message to a phone, can be captured and exploited.

The evolution of security measures has brought us from simple passwords to biometrics and beyond. Yet, many businesses are stuck in the past, relying on these half-measures.

It’s not just about keeping up with the times; it’s about safeguarding your organization’s future. One-size-fits-all solutions are ineffective, and risk-based authentication should be the norm, not the exception.

Why They Are Insufficient

Security half-measures, like using codes, devices, or unverified biometrics as identity proxies, are more than just weak points; they open doors for cybercriminals. The MGM breach is a stark reminder of the dangers of compromised security. Beyond financial loss, the real cost lies in eroded customer trust and potential legal consequences that can linger and deeply affect your business.

Why Businesses Must Move Beyond

Inadequate security comes with a hefty price tag, and it’s about more than immediate financial losses. Legal consequences and regulatory fines can cripple a business. However, the real challenge lies in restoring customer trust once lost. The MGM breach underscores the urgency for businesses to upgrade from outdated security protocols.

It’s time for businesses to take this issue seriously and invest in robust, up-to-date security protocols that adapt to emerging threats.

The Case for Identity-Based Authentication

Advantages of Identity-Based Authentication

Identity-based authentication is not just a feature; it’s a paradigm shift in how we, 1Kosmos, approach security. Traditional methods often rely on something the user knows, like a password, which is vulnerable to attacks. Our platform, however, offers a more advanced and secure approach.

User Convenience

One of the standout features of our platform is the convenience it offers to users. Physical attributes used for biometric verification, such as fingerprints or iris scans, are not vulnerable to damage or unexpected alterations. This ensures a swift and user-friendly authentication process, eliminating the need for cumbersome passwords or other traditional methods.

Flexible Use Cases

We understand that different scenarios demand different authentication methods. Whether the situation calls for a fingerprint, iris scan, voice match, or any other biometric identifier, our platform provides adaptable and affordable authenticators. This flexibility ensures that the authentication methods can adjust seamlessly as business needs evolve.

Empirical Data Supporting the Case

The effectiveness of our identity-based authentication is not theoretical; it’s proven. With industry-leading True Accept Rate (TAR) and False Accept Rate (FAR) figures, our platform offers high accuracy in identity verification.

Our certifications against NIST, FIDO2, and iBeta standards are a testament to the system’s ability to drastically reduce the risk of unauthorized access. That means fewer false positives and negatives, streamlining the user experience while maintaining a high level of security.

How Using 1Kosmos Helps Adopt a Proactive Approach

Multi-factor Authentication

Our multi-factor authentication system, bolstered by the integration of LiveID, is designed to be non-phishable, directly addressing one of the most common vulnerabilities that cybercriminals exploit.

What sets our multi-factor authentication apart? With LiveID, cybercriminals find themselves at a dead end. There’s nothing to steal, even if they attempt to compromise any factor. This unique feature ensures that our MFA system remains solid, secure and virtually impenetrable, providing an unparalleled layer of protection.

This is a game-changer for businesses striving to maintain high-security levels without compromising user convenience. By harnessing the power of LiveID, organizations can deploy a formidable defense against threats while ensuring a seamless user experience.

Biometric Encryption

Biometrics in our system are more than just a security feature; they’re foundational. Our liveness detection ensures users are genuine humans, guarding against bots and deepfake attempts – essential for tasks like privileged access management and employment verification. But we elevate this with biometric encryption, where a biometric template and a public-private key pair work together to encrypt and decrypt personal data, making unauthorized access nearly impossible.

BlockID stands out with its decentralized approach. Instead of central storage, user biometrics are kept private, minimizing attack surfaces. This bolsters security and ensures that biometrics requiring minimal user training can be swiftly and seamlessly integrated across organizations.

Compliance

Our platform, anchored in a private blockchain, is designed with user privacy as a forethought, not an afterthought. We don’t just align with GDPR, CCPA, and CPRA – we aim higher. 1Kosmos BlockID secures user personal information, easing GDPR-related challenges and cementing user trust. Additionally, our transparent log provides a clear “chain-of-custody,” which is invaluable for investigations relating to external threats or internal “friendly fraud” scenarios. In all aspects, our focus is to earn trust and ensure integrity.

The cyberattack on MGM Resorts highlights the pressing need for businesses to modernize their security approaches. And it can happen to anyone. While legacy systems were once the pinnacle of security, they now have fundamental limitations in the face of evolving threats. 1Kosmos BlockID, an advanced identity-based verification and authentication platform that is readily available, user-friendly, and private by design, represents the forefront of contemporary security solutions. The challenge isn’t the technology but the organizational shift in perspective. Contact us today to discover how 1Kosmos BlockID can strengthen your security posture.

The post Unmasking the MGM Resorts Cyber Attack: Why Identity-Based Authentication is the Future appeared first on 1Kosmos.


KuppingerCole

Identity Fabrics

by Alejandro Leal

Explore Identity Fabrics: the key to secure, scalable IAM, bridging legacy and modern systems in digital transformation. Learn more about how to select the solution that is right for you in our buyer's guide.

Nov 14, 2024: Understanding the Impact of AI on Securing Privileged Identities

Understanding the impact of AI on securing privileged identities has become a critical concern in today's rapidly evolving cybersecurity landscape. As artificial intelligence continues to advance, it presents both opportunities and challenges for organizations striving to protect their most sensitive access points. The rise of AI-powered threats has significantly altered the traditional identity attack chain, requiring a fundamental shift in how we approach privileged identity security.

KILT

Expanding Horizons: KILT Token’s First Move Towards Multi-Chain With Ethereum

We are thrilled to announce an exciting new development for KILT, its community and all Polkadot parachains: a seamless token bridge to Ethereum with the Project Polar Path, developed by the KILT Core Team.

Thanks to Polar Path, the KILT token is making its first leap beyond Polkadot and onto the Ethereum blockchain. This marks a crucial milestone in our journey towards a true multi-chain future, bringing more flexibility and exposure to KILT. In the near future, we plan to extend to even more blockchain networks, but today, we want to introduce the first step in this broader vision.

About Project Polar Path

Project Polar Path is a breakthrough feature for Polkadot parachains, enabling seamless token switches between Polkadot and Ethereum. At its core, Polar Path is a pallet specifically designed for Polkadot parachains. It has already been implemented on the KILT blockchain, leveraging Snowbridge — a secure, trustless bridge connecting Polkadot and Ethereum.

Snowbridge has already allowed ERC-20 tokens to cross over from Ethereum to Polkadot, but so far, the reverse has been a challenge. That’s where Polar Path comes in. It enables the conversion of native parachain tokens (like KILT) into an Ethereum-compatible version (ERC-20) and vice versa, providing a solution for parachains looking to expand their presence on Ethereum.

General Parachain Token Conversion vs. KILT’s Implementation

Polar Path’s ability to switch parachain tokens into ERC-20 tokens can be used by any parachain on Polkadot. However, the implementation for KILT includes a key distinction: when switching KILT tokens between Polkadot and Ethereum, the total supply of KILT always remains constant. For each KILT token that exists on the KILT blockchain, there is exactly one KILT ERC-20 token on Ethereum, and vice versa.

Whenever a KILT token is switched to an ERC-20 KILT token on Ethereum, the corresponding KILT token on Polkadot is locked and unavailable. Conversely, when the ERC-20 KILT token is sent back to Polkadot, it is locked and unavailable, and the original KILT token is unlocked. This ensures a one-to-one relationship between KILT on Polkadot and its bridged ERC-20 counterpart on Ethereum, maintaining the overall token supply and preventing inflation or duplication of tokens across chains.

Why Polar Path Matters

Expanding to Ethereum and other ecosystems creates significant opportunities for KILT and similar parachain projects:

- Increased Exposure: Integrating with Ethereum connects KILT tokens to one of the largest decentralized finance (DeFi) ecosystems, allowing holders to access a broad array of DeFi products and services.
- Wider Reach: Ethereum’s vast developer and user community enhances KILT’s visibility, driving broader recognition and adoption within the crypto space.
- Trustless, Secure Switches: Thanks to Snowbridge’s architecture, Polar Path ensures all token switches between networks remain secure and trustless, preserving user confidence without sacrificing security.

Visual Interface for Token Switching

To make the process user-friendly, the Galani Projects team, a valued contributor to the Polkadot community, has developed a proof-of-concept web interface. With this tool, users can seamlessly switch their KILT tokens to ERC-20 tokens using wallets connected to both networks. The interface makes it easy for users to specify the amount, source, and destination accounts, and with a few clicks, they can complete the switch.

In the future, Galani Projects plans to make this code open-source, enabling other parachain projects to use the interface for their tokens as well.

Overcoming Development Challenges

One of the key challenges in developing Polar Path was navigating Polkadot’s Cross-Consensus Message Format (XCM) and the associated fees. To simplify the user experience, we devised a solution that allows users to pay XCM fees in DOT by sending them to the KILT sovereign account on Polkadot’s AssetHub. Once the DOTs arrive, they are burned, and an XCM callback is triggered automatically. This process takes only about 30–40 seconds and eliminates the need for users to interact directly with KILT for fee payments.

Future Plans for KILT: Multi-Chain Expansion

Our Ethereum expansion is just the beginning. We are actively working on extending KILT’s multi-chain presence to other prominent blockchain networks. The vision is to have KILT tokens accessible across various ecosystems, offering even more flexibility and opportunities for our community.

Stay tuned as we continue to roll out updates and new features that will further enhance the flexibility, security, and functionality of KILT tokens across different networks!

You can learn how to bridge your KILT tokens here.

About KILT Protocol

KILT is an identity blockchain for generating decentralized identifiers (DIDs) and verifiable credentials, enabling secure, practical identity solutions for enterprises and consumers. KILT brings the traditional process of trust in real-world credentials (passport, driver’s license) to the digital world while keeping data private and in possession of its owner.

Expanding Horizons: KILT Token’s First Move Towards Multi-Chain With Ethereum was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Switch Polkadot and Ethereum Tokens with Polar Path

The KILT team is pleased to announce a new “Polar Path” feature that allows users to switch tokens between Polkadot parachains and the Ethereum network. Polar Path is a pallet for Polkadot parachains, already implemented on the KILT chain, that allows parachains to make their native token accessible on the Ethereum network.

The Snowbridge-based web app uses the Polar Path pallet to provide a front end that lets you visually switch tokens between parachain and ERC-20 tokens and transfer switched tokens between Ethereum and parachain networks via Snowbridge.

Project Polar Path is funded by the Polkadot community and developed by the KILT core team. This guide shows how to use the KILT parachain, but the web app will support other parachains in the future.

Prerequisites

To use Polar Path and Snowbridge, you need a Polkadot wallet and Ethereum wallet extension added to your browser. You need to set up the accounts and have sufficient funds in each account to cover the transaction fees.

Set up wallets

Metamask

- Install the Metamask extension
- Create an account
- Import the KILT token by clicking + Import tokens, paste "0x5d3d01fd6d2ad1169b17918eb4f153c6616288eb" into the Token contract address field, and click Next.

Polkadot Wallet

- Install a Polkadot wallet, e.g., polkadot{.js} extension, Talisman, Subwallet, etc.
- Create a new account if you don’t already have one.

Switching and Transferring

Switching between Asset Hub and KILT

To switch from KILT to Asset Hub, you need:

- Any Polkadot wallet which allows an Asset Hub account with a connected KILT account, KILT tokens, and DOTs on KILT and Asset Hub to pay XCM fees.
- Asset Hub account, with the same address as your KILT account and DOTs for the existential deposit.

To switch from Asset Hub to KILT, you need:

- Any Polkadot wallet with a connected Asset Hub account, KILT tokens on AssetHub, and DOTs to cover transaction fees
- Asset Hub account, which is the same address as your KILT account, and DOTs.

Transferring tokens between Asset Hub and Ethereum

To transfer from Asset Hub to Ethereum, you need:

- An Asset Hub account with the same address as your KILT account, and enough DOT to cover transaction fees.
- KILT tokens on Asset Hub.
- An Ethereum wallet with an account added.

To transfer from Ethereum to Asset Hub, you need:

- An Ethereum wallet with an account added, and enough ETH to cover transaction fees on Ethereum mainnet.
- An Asset Hub account with enough DOT to cover the existential deposit.

Switching Tokens

Go to app.snowbridge.network

To switch between KILT and ERC-20 tokens, select the Polar Path tab. Click Connect Polkadot to connect the app to your wallet extension.

Choose the source and destination networks; the app loads accounts from the connected wallets to populate the Source Account options. The Beneficiary account is always identical to the Source Account. As you change the source or destination network, the other drop-down menu adjusts to show the matching counterpart.

Finally, set the amount. The app now estimates the transfer and XCM fees. If you don’t have sufficient DOT on KILT to cover the XCM fees, don’t worry — clicking the submit button opens a pop-up that lets you transfer some of your DOT from Asset Hub to KILT.

Click the Submit button, sign the transaction in your wallet, and wait for the switch to complete.

Transferring Tokens

Go to app.snowbridge.network

Select the Transfer tab to transfer switched tokens between Ethereum and Asset Hub. Click Connect Ethereum to connect the app to your wallet extension and set the Source Account.

Select the Beneficiary account via the dropdown, which populates from the connected Polkadot wallet.

Finally, set the amount and select KILT as the token you want to transfer. The app now estimates the transfer fees. If you don’t have sufficient ETH to cover the fees, don’t worry — clicking the submit button opens a pop-up that lets you transfer some of your ETH.

Click the Submit button to initiate the transfer.

Note: Transfers to Asset Hub take around 20 minutes to complete. Transfers to Ethereum can take up to 40 minutes.

Switch Polkadot and Ethereum Tokens with Polar Path was originally published in kilt-protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ocean Protocol

DF110 Completes and DF111 Launches

Predictoor DF110 rewards available. DF111 runs Oct 10 to Oct 17, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 110 (DF110) has completed.

DF111 is live today, Oct 10. It concludes on October 17. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF111 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

- To earn: submit accurate predictions via Predictoor bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
- To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF111

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week, distributing these rewards evenly. Then, ROSE is distributed at the end of the week to active Predictoors who have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF110 Completes and DF111 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 09. October 2024

paray

The Need to Comply With the CTA comes Into Focus

October 8, 2024 was a bellwether date for those waiting on a court to clarify whether the statutory requirement for filing BOI Reports sits on solid ground.  It was on October 8, 2024 when the oral argument in the pending Eleventh Circuit appeal from Small Bus. United d/b/a Nat’l Small Bus. Ass’n v. Janet Yellen, … Continue reading The Need to Comply With the CTA comes Into Focus →

KuppingerCole

Adopting Passwordless Authentication


As businesses shift to more flexible work models, traditional password systems pose security risks and inefficiencies. The session will provide insights from recent KuppingerCole research, offering a comprehensive view of the evolving enterprise security landscape.

Join our webinar to explore the transformative potential of passwordless authentication solutions within modern enterprises. As businesses expand and adopt more flexible work models, the inefficiencies and security risks of traditional password systems are increasingly apparent. This session will introduce market trends and insights based on the latest KuppingerCole research, providing a well-rounded perspective on the current and future landscape of enterprise security solutions.

As an expert in digital identity and cybersecurity, KuppingerCole analyst Alejandro Leal will guide attendees through the evolving landscape of passwordless authentication. He will highlight the key features common to various market solutions, along with recent developments and future trends. Drawing from extensive research, Alejandro will emphasize the critical importance of user-friendly and secure authentication methods. He will focus on practical steps organizations can take to effectively implement these technologies, enhancing their security posture and operational efficiency.




Trinsic Podcast: Future of ID

Rohan Pinto - 1Kosmos's Journey from Blockchain to Passwordless Authentication


In this episode of The Future of Identity Podcast, I’m joined by Rohan Pinto, Co-founder and CTO of 1Kosmos, a company at the forefront of decentralized identity and passwordless authentication solutions. We explore the evolution of identity management and the journey from blockchain-based beginnings to building secure, user-controlled identity systems that go beyond traditional centralized approaches.

We dive into several key topics, including:

- Rohan’s background in identity and access management, and his transition into building cryptographic solutions that emphasize user control over their identities.
- The role of blockchain as an enabler in identity verification and why it’s not the complete solution to today’s identity challenges.
- 1Kosmos’s unique approach to authentication, including their pivot from blockchain to passwordless access using biometric verification.
- The challenges and potential of user-controlled identity and verifiable credentials, and why widespread adoption has been slower than expected.
- Rohan’s perspective on the future of identity, including how decentralized identifiers and biometrics will reshape how we access systems and interact with digital services.

Rohan shares insights from his new book and offers a deep dive into the complexities and opportunities of building a more secure, user-centric identity ecosystem. This episode is a must-listen for anyone interested in the future of identity, security, and the evolving digital landscape.

You can learn more about 1Kosmos at 1kosmos.com.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


Ockto

Consumer Credit Directive 2 (CCD2): What lenders will have to deal with


The consumer credit market is on the eve of a major change with the tightened Consumer Credit Directive 2 (CCD2). This new European directive will affect the way lenders operate and will bring new parties under supervision that currently fall outside it. The CCD2 aims to strengthen consumer protection and create a level playing field for credit providers in Europe.


CCD2 and the increased regulatory burden on consumer credit

This episode of the Data Sharing Podcast is about the Consumer Credit Directive 2 (CCD2). Hidde talks with Earvin van Ginkel, senior policy officer at the Vereniging voor Financieringsondernemingen Nederland (VFN).



Ocean Protocol

bci/acc: A Path to Balance AI Superintelligence

bci/acc: A Pragmatic Path to Compete with Artificial Superintelligence

An e/acc zoom-in on brain interfaces, towards human superintelligence

Summary

Artificial superintelligence (ASI) is perhaps 3–10 years away. Humanity needs a competitive substrate. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses: bci/acc. How do we make it happen? We’ll need BCI killer apps like silent messaging to create market demand, which in turn drive BCI device evolution. The net result is human superintelligence (HSI).

bci/acc draws on today’s technologies without requiring big scientific breakthroughs. It’s e/acc zoomed-in for BCI. It’s solarpunk: optimistic and perhaps a little gonzo. And it could be a grand adventure for Humanity.

Based on talks at Foresight Institute in Dec 2023 & Nov 2023 [video] and NASA Oct 2023. They extend this 2016 blog post, and this 2012 talk at BrainTalks@UBC.

=== Contents ===

1. Introduction

2. Artificial Superintelligence
2.1 How market forces drive ASI
2.2 The journey to artificial superintelligence
2.3 ASI Risk
2.4 Approaches to ASI Risk
- Decelerate -> let evolution happen -> speed it up (e/acc)
- Cage -> fancier cage
- Align post-hoc -> dumb-to-smart chain -> during training
- Get competitive (bci/acc)

3. Human Superintelligence, via bci/acc
3.1 Introduction
- High-bandwidth BCI challenges
- Implants-first vs masses-first
3.2 Baseline tech for bci/acc
- EEG for typing; for focus, more
- Glasses with subtitles; with voice interface
- AR Goggles + hand gestures: Meta Quest 3
- AR Goggles + eye-tracking: Apple Vision Pro
- Eye-tracking is BCI
3.3 BCI killer apps
- Silent messaging; internal dialog
- Perfect memory; share visual memories
- Talk in pictures; talk in brain signals
3.4 The journey to high-bandwidth BCI
- Bandwidth++ via implants; via optogenetics. Bike-shedding.
- Invasive BCI into mainstream -> growth
- Your BCI will be part of *you* -> hyper-local alignment
3.5 The journey to human superintelligence
3.6 Cognitive liberty

4. Conclusion
5. Appendix

1. Introduction

It was summer 1995. In the pages of Wired magazine, I read about a new product called MindDrive: “The first computer product operated by human thought”. I was skeptical. But I had to try it! So I dropped $150 and got one.

I’d slip the MindDrive on my index finger, and boot up into the game “MindSkier”. I’d ski downhill in first-person view, and try to steer between the 30 or so pairs of gates. I’d steer by “thinking”. It was actually an echo of my thoughts: the device’s gold-plated sensor tracked my skin conductivity (GSR). I would miss about 30% of the gates, compared to missing 80% of them if the device wasn’t on my finger at all. It worked, barely. A starting point for the next!

Left: the MindDrive. Right: In Rosie Revere, Engineer, Rosie’s great aunt teaches her a brilliant lesson.

At a giant engineering science fair, I set up the MindDrive for anyone to try. There was a line around the block [Spec1999]. There was a latent interest in BCI.

In 2001, I splurged $2K and bought an "Interactive Brainwave Visual Analyser" (IBVA). I'd wear a blue headband holding sticky electrodes to sense electrical signals on my forehead, i.e. an electroencephalogram (EEG). It sent the EEG signals to my computer, which displayed them as animated 3D graphics. More usefully, I could access the signals directly with my own software — so I did. I could hack BCI! Alas, it was hard to get good signals. I also tried OCZ NIA and Emotiv EPOC later on, but they weren't qualitatively better.

From these limited experiments — and adjacent work in AI and analog circuits — I had a feeling that BCI bandwidth could be optimized a lot. This 2012 work from Tsinghua University confirmed my hunch, achieving moderate typing speeds [Tsh2012]. A decade of optimizing later, we’re now at 62 words per minute (very good).

High-bandwidth BCI is not a scientific mystery; it’s an engineering problem.

Why might we be interested in high bandwidth BCI?

The answer is artificial superintelligence (ASI): AI machines with 1000x+ the cognitive ability of humans. ASI may happen as soon as 3-10 years from now. Market forces are pushing it into existence because there’s a lot of money at stake.

How do we, as humans, have a role in a world of AI machines with 1000x our cognitive abilities?

Humans need a substrate that's actually competitive with ASI: silicon. The best way to get there is brain-computer interfaces (BCIs). We've got to do this soon enough for ASI time scales; therefore, we need to accelerate BCI and get mass adoption. The net result will be human superintelligence (HSI).

The rest of this article has two sections:

- Artificial superintelligence (ASI): what's driving ASI, ASI risk, and approaches to address the risk.
- Human superintelligence: how to accelerate BCI and achieve mass adoption, to get Humanity competitive with ASI.

2. Artificial Superintelligence (ASI)

2.1 How market forces drive ASI

Market forces have been driving AI compute up. The plot below shows how the compute used for AI training has risen, from about 1950 until now (2024). The y-axis is logarithmic: each tick is another order of magnitude. Therefore, while the drawn curve is linear, the trend is exponential.

Market forces are driving AI compute up. [Graph from LessWrong.com, with my 20 PFLOPs overlay]

The compute has grown quickly: from about 100 (10²) floating-point operations per second (FLOPS) in 1950 to about 10²⁴ now. That's 22 orders of magnitude of compute growth in three-quarters of a century. To intuit just how much growth this is: it's the difference between 1 mm and flying to Alpha Centauri and back 10,000 times.

From this growth, we now have a lot of compute. To help intuition: George Hotz frames 20 PetaFLOP/s as “1 person” worth of brainpower (compute). This is akin to 746 Watts being “1 horse” worth of power (1 hp). Just as it’s easier to reason about horses worth of power, it’s easier to reason about persons worth of compute. We surpassed “1 person” worth of compute in about 2012. Now we’re 10 million times beyond; it’s like all the brainpower of NYC rolled into one compute system.
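These figures are easy to sanity-check. A back-of-envelope sketch, using the "1 person = 20 PFLOP/s" framing above; the totals are rough orders of magnitude, and the resulting person-count lands in the tens of millions, consistent with the "NYC rolled into one" framing:

```python
import math

FLOPS_1950 = 1e2       # ~100 FLOPS of AI training compute in 1950
FLOPS_NOW = 1e24       # ~10^24 FLOPS now
PERSON_FLOPS = 20e15   # George Hotz's "1 person" = 20 PFLOP/s

orders = math.log10(FLOPS_NOW / FLOPS_1950)
persons = FLOPS_NOW / PERSON_FLOPS
print(f"{orders:.0f} orders of magnitude of growth")  # -> 22
print(f"~{persons:.0e} persons of compute")           # -> ~5e7: tens of millions
```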

Market forces have driven compute up because it meant more money. More compute unlocked more markets, each of which was highly lucrative: from space & radio to TV, from the PC to the cellphone, from the smartphone to AI now and AR/VR soon. AI has a voracious appetite for compute, and the dollar benefits accrue accordingly. That's why there's so much money flowing into AI right now, with no sign of abating.

2.2 Path to ASI

For decades, we've had AIs that can do tasks that previously only a human could do. That is narrow AI. Examples are antenna design and analog circuit synthesis. For almost as long, we've had AIs that can do a task at a level far exceeding a human. These are also called narrow AI. Examples are digital circuit synthesis and software compilers.

We're about to get AI that can do all tasks that previously only a human could do. That is artificial general intelligence (AGI). To riff on Paul Graham, AIs will have progressed from "smart" (good at one thing) to "wise" (decent at everything).

Market forces will drive AGI from 1x smarter than humans, to 2x, to 10x, then 100x, then 1000x. It will happen quickly: there is $ to be made. We’ll arrive at AI that can do all tasks at a level far exceeding any human. That is, artificial superintelligence (ASI).

ASIs will be wildly smarter than humans. If in humans 2 is an idiot and 6 is an Einstein, then what is 1000, or 1,000,000? [Rutt2024a] It's such a difference that it's hard to imagine as a possibility; this cognitive dissonance will prevent most people from truly grasping it until it's right upon them.

ASIs will be wildly smarter than humans [From @AiSafetyMemes]

2.3 ASI Risk

Humans are 1000x+ smarter than ants. As humans, we don’t respect the rights of ants or “what the ants have to say”. We are their gods.

ASIs will be 1000x+ smarter than humans. We are now the ants. There is little guarantee that ASIs will respect our rights. This is ASI risk.

What will it feel like? God-like intelligence will beget god-like power: the ASIs will become our gods. In the Hyperion sci-fi series, ASIs exist, yet humans still barely comprehend them, except to know that ASI power is unimaginably vast [Hyp1989].

What can we do about ASI risk? Section 2.4 reviews various ideas.

2.4 Approaches to ASI Risk

2.4.1 Idea: decelerate

Yudkowsky and others advocate slowing down or pausing AI progress, then figuring out how to solve ASI risk. It's highly appealing at first glance. As with all such ideas, one must be careful: wishing doesn't make it true.

Alas, there is a problem: for such a deceleration to work, all deceleration efforts would need to be successful. If even just one entity defects, they could dominate the others. And that’s why this route likely won’t happen. There’s an AI race; at the core, it’s China vs USA, and there’s too much at stake for one side to cede speed to the other. So the race will go on. It’s like nuclear: for all the disarmament theatre, we still have the nukes.

Alas, what might happen is deceleration for all players except the US government and Chinese government (plus their proxies), and organized criminals. This hurts human freedom because it diminishes “voice” and “exit” for individuals, not to mention freedom to work on solving ASI risk 🤦 [Verd2023]. It’s a common trick for governments to use the banner of safety to take further control [Snow2013].

“They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.” — Benjamin Franklin

Perhaps most importantly, this “approach” doesn’t actually address the problem of ASI risk. That is, if AI was decelerated, then we’d still have to solve the core problem of ASI risk! That’s what most other ASI-risk approaches aim to do.

2.4.2 Idea: let evolution happen

“Let evolution happen” is the framing of Google founder Larry Page, and many others. They see humans as simply one step in the tree of evolution; that ASI is the next step; that we should be proud that we made the next step happen; and that if our biological bodies can’t compete (they can’t) then we should let go and get over it; that this is evolution.

From my work on evolutionary computation, I've seen how powerful evolution can be. It doesn't matter whether we like this framing; this really could be the scenario that happens.

However, letting go is not a solution to ASI risk. Personally, I’d love to keep building and playing for as long as I can, in a grand adventure, until I opt-in to end that adventure. While people have invented a thousand rationalizations for death, I choose life until further notice. Humanity should be the same. We have a potential grand adventure in front of us! So we should rage, rage against the dying of the light. Humanity should choose life until further notice.

2.4.3 Idea: speed it up (e/acc)

"Effective accelerationism" (e/acc) is a movement sparked by @BasedJeffBezos and @BayesLord, and extended & promoted by technologist / VC Marc Andreessen, among others. I find myself aligned with most of e/acc philosophy: grounded in physics, optimistic, build-up not tear-down, and more.

e/acc’s approach to AI is “let everyone have at it, speed it up”. It aims for a multi-polar AI world: thousands (or millions or billions) of superintelligent AIs or entities with superintelligent AIs, keeping each other in check. It’s a bit like the USA which balances power among three entities (legislative, executive, judiciary). Or, it’s like blockchains which balance power among thousands of nodes.

Therefore, perhaps surprisingly, e/acc is likely safer than the "deceleration" approach (which only has balance between two powers) 😲!

e/acc is also open to human superintelligence (HSI), but with no special emphasis. It's meant to be an umbrella idea, for others to add detail with zoom-ins.

Vitalik Buterin's "decentralized accelerationism" (d/acc) is a zoom-in on e/acc that emphasizes decentralized technologies, with a bit more bias towards safety. Like e/acc, it's open to HSI, though with no special emphasis.

Among those thousand or billion+ superintelligent AI entities, e/acc assumes that at least some of them will be friendly to humans; and that they will help humans have a role in the future. But what if the friendly ones are overruled by the unfriendly ones? And as the ASI risk introduction covered, why would gods bother treating ants well?

Fortunately, e/acc is sufficiently broad that it allows for variants not needing this assumption. The most promising variant is: use BCI to get a competitive substrate, with mass adoption. That’s bci/acc! This post will elaborate below.

2.4.4 Idea: put it in a cage, unplug if things go awry

First, some background. You can think of Bitcoin as a really dumb robot that does just one thing: maintain a ledger of transactions. Yet it’s also sovereign: it answers to no one, it is its own independent entity, you can’t unplug it. Similarly, Ethereum is sovereign. The Uniswap V2 decentralized exchange contracts running on Ethereum are sovereign too: they answer to no one. Arweave permanent data storage is sovereign. Ocean Predictoor AI-powered data feeds are sovereign. Every smart contract that doesn’t have governance is sovereign. Finally, the internet itself is sovereign. Building sovereign software systems is a solved problem. Appendix 5.1 elaborates.

With that background in place, let’s review the idea: “put the ASI in a cage, and unplug if things go awry”.

Here’s one problem: you can’t unplug it. The ASI is smart, so it’s already made itself decentralized, therefore sovereign, therefore un-unpluggable. Just like Bitcoin.

Some observers see this idea and other similarly glib “takes” as a waste of energy. The “AI Alignment Bingo” in Appendix 5.2 offers a concise (and hilarious) summary of many takes & responses.

2.4.5 Idea: fancier cage

The idea is to use advances in cryptography, blockchain, and more to make the cage “hack proof”. Sergey Nazarov of Chainlink is a proponent, among others.

The problem: humans are the weak link in computer systems. Hackers like Kevin Mitnick have made Swiss cheese of computer systems by tricking gullible humans into giving them access, not by attacking the software or hardware directly. Therefore, the "fancier cage" idea is not feasible unless we 100% solve human gullibility (not going to happen).

Loki after tricking Thor to escape a fancy cage: "Are you ever not going to fall for that?" [Avengers 1]

2.4.6 Idea: align via a post-hoc wrapper

This is the approach that OpenAI took for GPT. The idea is akin to installing an aftermarket exhaust system on a new car to tune its behavior in a particular direction. For example, train an unconstrained LLM first; then tack on RLHF training to align it with human values. If all goes well, scale up this approach as we get to AGI and ASI.

Alas, it has been shown to be easy to jailbreak, with holes everywhere, as the world has witnessed with ChatGPT running GPT-4. Finding issues and adding more constraints ends up as endless whack-a-mole. I've been there, for other AI problems. The root problem: tacking a band-aid onto such a core problem will (likely) never be enough.

Main: aligning an AI via a post-hoc wrapper is like adding an aftermarket exhaust system to your car. Bottom right: endless jailbreaks are like whack-a-mole, where as soon as you whack one issue, another pops up.

2.4.7 Idea: dumber AIs aligning smarter ones

This is the approach published by OpenAI in December 2023. The idea is to have a chain of AIs from dumb → smart, where each link is a dumber AI aligning the next-smarter AI.

Alas, this is only as strong as its weakest link (and links can be weak), there is risk of over-leverage (think 2008 financial crisis), and the ASI at the end of the chain might disagree or change the rules. Appendix 5.3 elaborates.

2.4.8 Idea: align the AI while training

Can we ever align something 1000x smarter than us? This idea sidesteps that concern in the near term in two complementary ways:

- Diligently choose a training set based on strongly-held human values [Weng2023].
- Start at 1x-level or even 0.1x-level, as in a human baby; then grow it to a child, then a teenager, then an adult, and beyond, keeping it aligned the whole time [Goer2013].

This is akin to growing square watermelons, that grow subject to human-induced constraints 🍉 🤖. The hope — but not guarantee — is that as it goes from 1x to 10x and beyond, it remains aligned to human values.

This approach also assumes that data-centric learning will be the trick to get to ASI. It may be one of the most important, but maybe not the most important [Rutt2024b].

There’s promise to this idea; it’s worth trying.

2.4.9 Idea: bci/acc: get a competitive substrate via BCI

Silicon is a wildly powerful substrate: it already has amazing compute, storage and bandwidth and it keeps improving exponentially. It’s what’s powering AI, and soon, AGI and ASI.

This idea is: our current meatbag brains just can’t compete against silicon for processing power. It’s “1 person” of processing power vs 10 million.

Everything that silicon touches goes exponential: the “Silicon Midas Touch”. For our brains to compete with silicon, they must touch silicon. The higher bandwidth the connection, the more that our brains can unlock the power of silicon for our selves.

Therefore we need to swallow our pride, stop treating carbon like a deity, and get a competitive substrate: silicon. The specific “how” is brain-computer interfaces (BCIs), or uploading. The target is human superintelligence. Some call this “the merge”. Others, “intelligence amplification” (e/ai).

Given ASI timelines of 3–10 years, simply hoping for "the merge" means it likely won't happen fast enough. We need to accelerate it somehow. The options are BCIs or uploading. Uploading is still mostly a scientific problem, way too far out to be relevant to ASI risk. In contrast, BCI has already matured past the science into engineering problems. Of the two, BCI is the more pragmatic.

We can’t just invent an amazing BCI technology. To truly counter ASI, we need to get it in the hands of the mainstream billions.

In short, we need to accelerate BCI and get it to mass adoption. This is what bci/acc is all about.

Another framing of bci/acc (in one variant): it's "align the AI at the core, as you train", but in the most hyper-localized way imaginable: train one AI for every single human, where each human constrains the AI in real time, and the AI starts small and grows iteratively. It's a square-watermelon AI as a co-processor to your brain 🍉 🧠.

bci/acc: accelerate BCI and take it to the masses. It uses BCI killer apps like silent messaging (SMs) to create market demand, which in turn drives BCI device evolution to a substrate competitive with ASI.

3. Human Superintelligence via bci/acc

3.1 Introduction

Accelerating BCI (bci/acc) is among the least-discussed approaches, yet it may have the best chance of success in addressing ASI risk. So it's imperative that we explore bci/acc more deeply.

3.1.1 High-Bandwidth BCI challenges

To go all the way to human superintelligence, non-invasive BCI likely won’t have enough bandwidth. We will need super-high bandwidth BCI, via neural implants (invasive) or optogenetics (semi-invasive) or other such brain technologies.

Alas, going invasive or semi-invasive has its own challenges, on engineering, regulatory, and societal fronts:

- Engineering. The main goal is to increase bandwidth, a hard enough thing on its own. Yet engineering must also solve critical privacy risks, lest we lose cognitive liberty.
- Regulatory. Getting approval for human trials of (semi-)invasive brain technologies is currently a long, high-friction process, in the name of test-subject safety. Alas, the current regulatory structure ignores the much larger ASI risk to Humanity: a bike-shedding problem. How can we speed this up?
- Societal acceptance. Even if the devices existed and regulations allowed them, invasive BCI currently feels icky to most people. This will limit Humanity's ability to manage ASI risk. The Overton Window will likely need to shift so that mass society is more open to such technologies.

There are two different routes to solving challenges (1)(2)(3): implants-first and masses-first. Let’s explore each.

3.1.2 Implants-First Route

Elon Musk's Neuralink has made great progress over the past decade.

Tansu Yegen on Twitter: "🧠 Elon Musk announced the first successful Neuralink brain chip implant in a human. Think about telling someone 10 years ago that by 2024, we'd be on the brink of unlocking telepathy..."

Neuralink is perhaps the furthest along on engineering (1). Its path to regulatory (2) is to focus on healing people, which limits its speed. Societal acceptance (3) is on ice until regulatory (2) is much farther along. In short, its route is (full 1) → (full 2) → (full 3). While I’m a Neuralink fan, to maximize chance of success, I’d love to see more companies chase this route.

3.1.3 Masses-First Route

Given ASI timescales, the Neuralink route to (1)(2)(3) may not be fast enough. There’s another path: route (partial 1, full 2, full 3) → (better 1, full 2, full 3) → (full 1, full 2, full 3). That is: start with non-invasive BCI tech that has no regulatory issues, and get mass adoption. Use this mass adoption to grow societal openness and open up regulations towards (semi) invasive BCI.

The starting point is killer apps for healthy people with non-invasive tech.

- Killer app. To hit the masses, BCI needs a killer app. We need to "make something people want". Silent messaging (SMs), aka pragmatic telepathy, is one candidate; perfect memory is another; there are more. Below, I explore candidate killer apps like silent messaging. Once we have that first killer app, we can expand to adjacent functionalities.
- Healthy people. To hit the masses, BCI needs to optimize healthy humans, not merely fix human ailments. Otherwise it's not mass-market enough.
- Non-invasive first. To hit the masses, BCI needs to be non-invasive to start. Invasive won't get enough takers at the beginning, and regulation is a bottleneck. But to truly leverage the Silicon Midas Touch we must get to invasive. How? Pressure from market forces and ASI risk will take us over the hump.

3.1.4 Discussion & Outline

bci/acc allows for an implants-first route, a masses-first route, and other routes. We don’t know which will be best; we should explore all of them aggressively. Since Neuralink’s actions elaborate the implants-first route, much of this post will focus on the masses-first route. (To be clear, bci/acc includes all routes.)

The next sections elaborate on masses-first bci/acc as follows. First, I will briefly review some emerging technologies that will help. Then, I survey some candidate BCI killer apps. Then, I describe how demands from market forces and ASI risk will drive BCI performance up. Finally, I describe how many iterations take us to human superintelligence (HSI), a new phase for Humanity.

3.2 Baseline Tech for bci/acc

Humanity’s technology capability frontier keeps expanding. This section explores technologies on the market that are adjacent to bci/acc. They can be used as lego blocks towards launching the first BCI killer apps.

3.2.1 EEG for Typing (“Silent Messaging”)

EEG for typing keeps improving. As mentioned earlier, researchers can now type via EEG at 62 words per minute, and it keeps getting better.

3.2.2 EEG for Focus

There are other companies targeting the mainstream with EEG. For example, Neurable is making consumer BCI headphones to help people focus. You put on their headphones, which detect electrical signals on the skin around your ears, and they ping you when you fall out of focus. There's also EEG to track emotions, alertness, arousal, meditation effectiveness, and more [Ref].

3.2.3 Subtitles on Glasses

XRAI, Vuzix and others offer glasses with subtitles for the deaf: "hear with your eyes". The glasses have a microphone to capture audio, transcribe it via AI-based speech recognition, and render the text on the subtitle display. The tech can be inexpensive, since the subtitles can use 1970s-era LCD displays and 99% of the rest can run on a smartphone.
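As a rough illustration of that pipeline (microphone in, transcription, subtitles out), here is a sketch using the open-source speech_recognition package as a stand-in for the glasses' own transcriber; the display function is a hypothetical placeholder:

```python
# Toy subtitle-glasses loop: listen, transcribe, render as subtitles.
import speech_recognition as sr

recognizer = sr.Recognizer()

def show_on_glasses(text: str) -> None:
    print(f"[subtitles] {text}")  # placeholder for the LCD subtitle display

with sr.Microphone() as source:  # stands in for the glasses' microphone
    while True:
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            show_on_glasses(recognizer.recognize_google(audio))
        except sr.UnknownValueError:
            pass  # nothing intelligible; keep listening
```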

3.2.4 AI-powered glasses with voice interface

11 years ago, we had Google Glass doing this. It was officially scrapped due to privacy concerns, and unofficially because society just wasn’t ready for it. Since then, we’ve had ten more years of smartphone evolution and adoption. We’re in an Instagram x TikTok era where privacy matters less, for better and for worse.

In October 2023, the Rayban | Meta Smart Glasses shipped. This device records and stores video directly from the glasses. You can tap it to send photos or videos to friends. There was no privacy or weirdness pushback. The Overton Window had shifted: 11 years was more than enough for society to be ready. From personal experience: they’re lightweight to wear, and according to Ray-Ban employees they’re selling briskly.

3.2.5 AR Goggles + Hand Gestures: Meta Quest 3

The Meta Quest 3 was released in October 2023. Whereas its predecessors were Virtual Reality (VR) goggles, it brings in the real world: Augmented Reality (AR), aka mixed reality or spatial computing. It scans your room, and renders real-plus-overlay into your headset’s display. It tricked my brain into “being there”. You can control it with hand gestures, but these are still unreliable; the Quest still supports handheld controllers.

3.2.6 AR Goggles + Eye Tracking: Apple Vision Pro

For any given device idea, Apple may iterate for years or decades before they release it, if ever. Why? Because they only release when the device not only “doesn’t suck”, but is actually pleasant or delightful to use. This was the case for phones, for tablets, and for cars (still a WIP).

It's also the case for AR goggles. Apple has patents on AR going back two decades. Yet they finally put a device up for presale on Jan 19, 2024: the Apple Vision Pro. From Apple's perspective, they've cracked AR well enough to release something pleasant or delightful.

What's changed? Eye-tracking-based input. Eye tracking has been used in medical research for decades, and for more widespread things like consumer marketing for 10+ years. You can use eye tracking to type, move a cursor, click a button, and more.

Apple Vision Pro has eye tracking. Knowing Apple’s approach to new devices, they probably already have interfaces to type, move a cursor, and click buttons — all hands free, accurate, and pleasant.

As it rolls out, there's a good chance people will find it as magical as multi-touch on phones. Eye-tracking is to AR control what multi-touch is to phones. It may be the remaining piece to take AR beyond video games and truly mainstream. And it will become table stakes for AR; expect Quest 4 to have it.

I can’t emphasize this enough: eye-tracking may be the “unlock” that makes these head-mounted glasses or goggles actually useful.

3.2.7 Eye-Tracking is BCI (!)

Eye tracking offers the hands-free benefits of BCI with the accuracy of moving your hands. Eye tracking feels like BCI: moving your eyes doesn't really feel like movement. Yet it's nearly as accurate as moving your hands, because ultimately eye tracking is motor control.

If a 20-year-old university student’s eyes are bloodshot, there’s a good chance they are hungover, got little sleep, or both. To generalize this, our eyes tell a lot about our health. In the last few years, there’s been an explosion of research using HD images or videos for medical diagnosis or treatment. A recent “Frontiers in Neuroscience” edition had 23 articles dedicated to this topic, including this intro.

So: (1) eye tracking takes HD videos of eyes; (2) HD videos of eyes are sensors for brain activity; (3) therefore, HD video of eyes amounts to a BCI sensor.

Modern eye tracking takes HD videos of your eyes. Thus, modern eye tracking is BCI.

[Quote from Frontiers in Neuroscience]

3.3 Candidate BCI Killer Apps

We’ve covered how ASI is coming, and how Humanity’s best chance to stay competitive is to accelerate BCI and take it to the masses (bci/acc). To get BCI to mass adoption, we need an application of BCI that the masses really want to use — a killer app.

We don’t know which killer app might take off first. However, we can explore possibilities. This section reviews some of those.

3.3.1 Candidate killer app: Silent Messaging

Just as Neal Stephenson’s 1992 novel “Snow Crash” is the archetypical vision for Virtual Reality, Vernor Vinge’s 2006 novel “Rainbows End” is the archetype for Augmented Reality.

Infused throughout Rainbows End, there’s a special <sm> tag for when the characters are messaging each other with “silent messages” (SMs):

Vinge leaves the reader to infer what specifically SMs are. But one soon realizes that it’s messaging each other simply by thinking about it. Yes, telepathy, but presented as just part of the furniture, and it just works, therefore “pragmatic telepathy”.

SMing = silent messaging = sending text or voice by thinking about it. Send = eye-tracking / EEG / etc. Receive = subtitles on glasses.

How would we do this? One inputs messages via EEG BCI, eye-tracking, or subvocalization. One receives messages via subtitles on glasses or goggles, or audio in your ear.

Specific implementations are any combination of the above. Examples:

- Glasses with subtitles + EEG BCI sensors on the top of the glasses, touching your forehead inconspicuously.
- An Apple Earbud-like device that captures sub-vocalizations, then synthesizes speech and outputs it to others as audio.
- Apple Vision Pro for eye-tracking input and subtitles-based output. Therefore society may get (pragmatic) telepathy upon the release of Apple Vision Pro (!).
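To make the moving parts concrete, here is a toy sketch of that send/receive loop. Every function here is a hypothetical placeholder; no such decoder or display API exists as named:

```python
# Toy silent-messaging (SM) loop: decode text from brain/eye signals on the
# send side, render incoming text as subtitles on the receive side.

def decode_text_from_signals(samples: list[float]) -> str:
    """Placeholder for an EEG / eye-tracking 'typing' decoder."""
    raise NotImplementedError("stands in for a trained BCI decoder")

def render_subtitle(text: str) -> None:
    """Placeholder for the glasses' subtitle display."""
    print(f"[subtitle] {text}")

def send_silent_message(samples: list[float], outbox: list[str]) -> None:
    # Send side: decode the intended text, queue it for delivery.
    outbox.append(decode_text_from_signals(samples))

def receive_silent_messages(inbox: list[str]) -> None:
    # Receive side: render each incoming message on the glasses.
    for message in inbox:
        render_subtitle(message)
```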

3.3.2 Candidate Killer App: Internal Dialog

Imagine Jiminy Cricket on your shoulder, sharing advice or facts when you call upon him. Without having to pull out your phone and type; without having to read results on your screen. "What's the capital of Portugal?" "Is this person lying to me?" "What's next on my TO-DO list today?"

Achieving this is straightforward: type with BCI / eye-tracking / subvocalization; the text goes to a ChatGPT-style bot; and the output is rendered visually in the glasses / goggles, or as audio.
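A sketch of that loop, with the BCI input and glasses output as hypothetical placeholders, and the LLM call via the OpenAI Python client (the model name is an assumption; any chat model would do):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def internal_dialog(decoded_question: str) -> str:
    """decoded_question arrives via EEG / eye-tracking / subvocalization."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute any chat model
        messages=[{"role": "user", "content": decoded_question}],
    )
    return response.choices[0].message.content  # render to glasses or earpiece

# e.g. internal_dialog("What's the capital of Portugal?")
```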

3.3.3 Candidate killer app: Perfect Memory

Here, you record images / audio / video with glasses, goggles, or a necklace-style device like Rewind Pendant. This gets stored locally or globally.

You search for the recordings via EEG BCI, eye tracking, or sub-vocalization. Or, use near-infrared non-invasive BCI on the back of your scalp to see what’s going on in your visual cortex. It doesn’t need to be perfect; it just needs to be good enough to serve as a query across video feeds. Even ten years ago, research results were extremely promising.

Once you’ve found the memory, it gets rendered in the glasses or goggles.

You won't have to retrieve by moving your fingers around; you'll just move your eyes, or think with the EEG, and you'll be able to retrieve these videos. Everything you saw, you'll have perfect memory of. It will feel magical.
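A sketch of the retrieval step: store one embedding per recorded clip, then rank clips against a (noisy) query embedding decoded from the BCI. The embedding source is the hypothetical part; the ranking is plain cosine similarity:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_matching_clip(query_vec: np.ndarray,
                       clip_vecs: dict[str, np.ndarray]) -> str:
    # The decoded query need not be perfect -- just good enough to rank clips.
    return max(clip_vecs,
               key=lambda clip_id: cosine_similarity(query_vec, clip_vecs[clip_id]))
```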

Perfect memory. (1) Record via glasses cam, then store. (2) Retrieve via eye-tracking / EEG / etc. (3) Project the result on the glasses' display.

3.3.4 Candidate killer app: Share Visual Memories

Here, you search for and retrieve videos as in "perfect memory" above.

Then, you click “share” and choose “to whom” via BCI / eye-tracking / sub-vocalization.

A picture’s worth a thousand words: we’ll be able to communicate with others at higher bandwidth than ever before.

3.3.5 Candidate Killer App: Talk in Pictures

Here, you share video to others, but no longer bound by what you’ve seen or found. Rather, you type (via BCI etc) to prompt a generative AI art system. You do this in real-time, and send the images / videos in real time to someone else. They see it and respond, in images / video.

Now. You’re. Talking. In. Pictures.

3.3.6 Candidate Killer App: Talk in Brain Signals

We go further than talking in pictures. If the devices always displayed raw brain signals alongside text or images, then over time our brains will learn the mapping. It won’t be much different than learning Spanish, sign language, or Morse code. Our brains can handle unusual inputs, like learning to see with your tongue. The net result: we could communicate directly with raw brain signals. AI research often finds “direct” to be better than using intermediate features, if there is enough data. It’s a brain-brain interface.

From this, a new kind of language — a neural language — could emerge, which will chunk lower-level abstractions into higher-level ones for higher bandwidth yet [Rutt2023c]. We’ll have transitioned from skeuomorphic languages for our brain (text/images as a bridge to the past, tuned for the outer world) to brain-native languages (tuned to our inner world).

This approaches the long-held science fiction dream of “mind meld” as “a telepathic union between two beings; in general use, a deep understanding.” We can start building primitive versions now.

Mind-meld: talk in pictures, raw brain signals, or a new neural language

3.4 The Journey to High-Bandwidth BCI

We’ve discussed the risk from ASI, how BCI is the most pragmatic path, BCI challenges (engineering, regulatory, societal), and possible BCI killer apps to kick-start usage by the masses. What then? This section explores how market forces and ASI risk will drive further evolution and adoption of BCI, including a transition to more invasive technologies.

3.4.1 Introduction

A silicon-stack co-brain offers 100x+ more storage and 100x+ more compute than bio-stack brains (our current brains). Alas, both are held back by the low bandwidth between the bio-stack brain and the silicon-stack co-brain.

Non-invasive techniques like EEG, eye-tracking and subvocalization can only take us so far [BciTech]. There’s an upper bound to their bitrates; it’s not very high; and we’ll probably squeeze every last bit from them.

And there are invasive techniques that promise 100x+ more bandwidth. Most promising are chip implants and optogenetics. Let's review those, then see how they might enter mainstream usage by healthy humans.

3.4.2 Bandwidth++ via Implants

Here, a doctor or machine opens up a portion of your skull, slips in a chip, and seals it back up. That chip then talks to your brain, and wirelessly to computers. 100x+ the bandwidth compared to EEGs, boom.

Research has happened for decades. Neuralink is a leading example. It’s in early stages of human trials.

Implants (conceptual)

3.4.3 Bandwidth++ via Optogenetics

Optogenetics enables reading & writing on the brain. One gets an injection containing a “useful virus” that changes specifically targeted neurons to fire when light is shined on them; and more. Put precisely:

“Optogenetics is a technique to control or to monitor neural activity with light which is achieved by the genetic introduction of light-sensitive proteins. Optogenetic activators [“opsins”] are used to control neurons, whereas monitoring of neuronal activity can be performed with genetically encoded sensors for ions (e.g. calcium) or membrane voltage. The effector in this system is light that has the advantage to operate with high spatial and temporal resolution at multiple wavelengths and locations”.

Optogenetics research is proceeding. As of 2021, there were four clinical trials involving optogenetics (on humans).

Optogenetics is promising for mass BCI because it’s less invasive than chip implants (injection vs surgery) and maybe more bandwidth (across the whole brain, yet fine-grained).

However, due to genetic manipulation and coaxing our brains to fire photons, many side effects are possible. For example, what if the brain fires too much and causes a seizure? Nonetheless, given ASI risk, research needs to proceed with even more urgency than before. It will need to get past a bike-shedding problem, as the next section elaborates.

Optogenetics (conceptual)

3.4.4 Invasive BCI regulation has a bike-shedding problem

None of the research on implants or optogenetics is (officially) aimed at healthy humans; it’s all for fixing human ailments.

Why? Because it’s already super-hard to get regulatory approval for human trials for the latter; going for the former has seemed unattainable.

Why? Put yourself in the shoes of a regulator. You’re used to balancing risk vs reward for a narrowly-scoped problem to fix a specific human medical ailment. You’re not used to balancing risk vs reward for a civilization-scoped issue, to avoid a non-medical existential risk for all Humanity. (Despite being the gatekeeper for that.)

So what do you do? You focus on what you know, and dismiss the existential risk. This has a term: bike-shedding. It's when the safety committee for a nuclear power plant spends 95% of its time discussing the bike shed, because it isn't equipped to do anything about the big hairy nuclear-risk issue.

BCI research is being bike-shedded right now. I’m hopeful that this will change as regulators and their higher-ups recognize the issue.

3.4.5 What will tip invasive BCI into the mainstream?

Given the current regulatory constraints, how can invasive BCI accelerate into the mainstream? I see two main forces driving demand to make this happen: fixing ASI risk, and market forces.

ASI Risk. Ideally the regulators of large nations recognize the bike-shedding bias and reduce BCI restrictions, perhaps being super-aggressive to accelerate BCI via a “BCI Manhattan Project”. This could build on existing BCI-for-defense research like DARPA’s decades-long program.

Smaller hungry nations may take the lead, for the $ and the PR. There’s $ and PR incentive to nations that loosen rules to meet market demand, such as Estonia’s E-residency, China’s Shenzhen special economic zone, and Singapore’s crypto regulations. There’s a growing trickle in the medical domain. For starters, via the Zuzalu project, Montenegro recently lightened rules to catalyze longevity research. Most interestingly, Honduras already has very light rules for medical testing: Bryan Johnson recently leveraged it to get a novel gene therapy there; there’s nothing stopping aggressive BCI testing in Honduras. Growing movements like Network State and Blueprint will further catalyze this jurisdictional arbitrage for invasive BCI.

Market forces. Consumers who start with non-invasive BCI will demand more performance, therefore more bandwidth, which means invasive BCI. Thus, there’ll be a bottom-up consumer push for invasive BCI.

When consumers see others using BCI for medical treatment and receiving benefits far beyond getting healthy again, they'll get particularly insistent.

People with the $ who are ready to accept the high risk and high reward will fly to Honduras for medical invasive-BCI tourism. Or they’ll build their own, just as Professor Xavier built Cerebro BCI in X-Men. Military BCI will leak into criminals and the black market, then into mainstream to satiate demand, like in Strange Days and Cyberpunk: Edgerunners. Businesses will sprout up to get “medical BCI” in the hands of anyone who asks, like we saw for medical marijuana in California.

Those with the $ and risk tolerance to get high-bandwidth BCI first will enjoy a significant advantage. This will raise legitimate questions about fairness. Ideally, cost and risk will come down quickly, making it broadly accessible. Let's see.

Left: Professor Xavier using Cerebro in X-Men. Middle: Spinal-implant BCI in Cyberpunk Edgerunners.

3.4.6 Mainstream invasive BCI will grow, a lot

We just covered how invasive BCI will tip into the mainstream. What happens next? It's the economics, silly. There was great demand even before invasive BCI, despite limited bandwidth. Invasive BCI will unlock massive bandwidth, critically, for a market demanding it. So the BCI growth rate will steepen.

The BCI market will merge with the $500B smartphone market, if it hasn't already done so pre-invasive. iPhone 20 or 25 will be BCI-based, perhaps via a merge with Apple Vision Pro. Meta Quest 7 or 10 will get invasive BCI to complement eye-tracking and other non-invasive BCI. Neuralink will launch its "phone". Expect Samsung, Microsoft, OpenAI and others to get in the game too. A lot of $ is up for grabs.

Shmoji on Twitter: "whenever people ask Elon why none of his companies have made a phone yet, he responds 'Neuralink'. You won't need a phone"

What then? The devices will evolve and improve, subject to intense competition, one generation to the next, just as phones have over the past 40+ years.

Market forces and the Silicon Midas Touch drive performance. We’ll see 10x in bandwidth, which unlocks 10x+ more storage and compute. Then 100x in bandwidth, unlocking more storage and compute yet. Then, especially once we go semi-invasive/invasive with optogenetics or implanted chips, we’ll see 100x+ bandwidth, and corresponding 100x+ in storage and compute.

Moore's Law and AI improvements within BCI apps will further catalyze usefulness and demand. As a recent example, "BrainGPT" uses LLMs to interpret brain data with significant error reductions.

3.4.7 Your BCI will be part of *you*

We are all natural born cyborgs: when you ride a bike, it becomes part of you, as far as your brain is concerned. Same for keyboards.

The same will be true for BCI.

As far as your brain is concerned, your BCI — and the computers that you access — will be part of you.

3.4.8 Hyper-localized AI Alignment

You'll be a cyborg, with a bio-stack meatbag brain and a silicon-stack brain working in unison. It will feel as natural as using a keyboard or riding a bicycle.

The compute & storage of the silicon-stack will have its own AI-based automation, to abstract complexity from the bio-stack side. As its compute & storage grows, we can expect emergent intelligence (in the John Holland sense). Then a concern arises: could the silicon stack AI take over the bio stack? The alignment problem rears its head here too! Fortunately, there’s a natural solution.

To maximize the chance that the silicon stack stays aligned with us, we ensure that processing or storage does not outpace bandwidth, at each evolutionary step along the way. This is no guarantee however: what if the silicon-side starts accessing way more compute from the internet?

This is different from traditional AI alignment approaches: here we are aligning the AI in real time, aligning it with our selves. It’s hyper-localized to each of us. It’s one aligned AI per human, rather than 1 or 10 for the human race. Therefore it’s 10 billion times more fine-grained and personalized. It’s AI alignment taken to the limit (in the calculus sense). There’s no guarantee that this will work. But it’s highly promising.
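As a toy illustration of that rule (silicon-side capability gated by measured brain-silicon bandwidth), here is a sketch; the scaling constant and units are pure assumptions, chosen only to show the invariant:

```python
def max_allowed_compute(bandwidth_bits_per_s: float,
                        flops_per_bit: float = 1e6) -> float:
    """Cap on silicon-side FLOPS as a function of measured BCI bandwidth."""
    return bandwidth_bits_per_s * flops_per_bit

def upgrade_is_aligned(proposed_flops: float,
                       bandwidth_bits_per_s: float) -> bool:
    # Reject any upgrade step whose compute would outpace the bandwidth cap.
    return proposed_flops <= max_allowed_compute(bandwidth_bits_per_s)
```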

3.5 The Journey to Human Superintelligence

This essay started with the ASI risk, and has shown a path to accelerate BCI and take it to the masses. The previous section showed how market- and risk-driven evolution took us to high-bandwidth BCIs. This section picks up from that.

At first, the silicon-stack brain’s power will be much weaker than the bio-stack one, bottlenecked by bandwidth. Then, we’ll increase bandwidth iteratively, with corresponding unlocks in compute and storage.

The silicon-stack side will get par with the bio-stack side.

Then it will start to surpass it.

We’ll keep going, as the market will demand it. [Rutt2024d]

The silicon-stack side will become radically more powerful than the bio-stack side.

And that will be fine with us! We'll have gone through each BCI generation iteratively, rather than having it all sprung on us at once. Our worries will abate because the silicon-stack AI will be aligned.

In fact, the silicon-stack will feel like part of *you* as far as you’re concerned, because you’re a natural-born cyborg along with the rest of us. The emergent patterns of intelligence on the silicon-stack side will be wholly our own.

It will feel like the most natural thing in the world.

Each of us will grow our compute & storage by 10x, 100x, 1000x, more. The silicon-stack emergent patterns of intelligence — part of us — will grow 1000x too. Yet we will still be humans.

We will have grown to achieve human superintelligence.

There’s more. Let’s say you’ve got to 1000x storage & compute, 1000x intelligence via the silicon-stack side of your self. Let’s say you’re now 90 years old and on your deathbed. Your bio-stack body and brain is dying.

Yet your bio-stack brain is now only 1/1000 the intelligence of the silicon-stack side. It’s probably been annoying you for a while, perhaps holding you back. And now it’s really holding you back, lying there.

What do you do?

You clip it like a fingernail.
And now you are on a pure silicon stack.

There’s more. Consider the possibility that in 100 years (or 20), the majority of intelligences will be on a silicon or post-silicon substrate. Some will have human origin, some will have pure AI origin, and some will have a mix. They will all be general; they will all be sovereign; they will all be superintelligent. They are Sovereign General Intelligences (SGIs).

What will the landscape look like? Hyperion Cantos provides inspiration. SGIs will inhabit the datumplane: “common ground for man, machine, and AI.”

The datumplane: common ground for man, machine and AI

So we have a path to unbound ourselves from biological constraints, while retaining our humanity. Which makes it a great time to ask: How big can you dream? What’s the biggest thing that civilization could possibly achieve? What do we want, as Humanity?

I mean Humanity in the broadest sense of the word: not just humans, but the multiple layers of civilization that encompass humans. Our thoughts and dreams, our patterns of intelligence, and how we want to self-actualize as a civilization.

As a baseline, we definitely know we don’t want to die, whether from asteroid strikes, nuclear holocaust or AIs terminating us all. “Not die” is a starting point. Accelerating BCI helps address all of those, because it allows us to easily be multi-planetary and be competitive with pure AIs.

“Not die” is an “away” viewpoint. Can we be more positive than that, with a “towards” perspective? Several steps more optimistic is: explore the cosmos, Star Trek style. That would be a grand adventure for Humanity on its own.

We can do better yet: let’s reshape the cosmos! Build Dyson Spheres to harness the power of stars directly — Kardashev Scale Type II. Reshape the cosmos at the scale of galaxies (Type III). Master energy at the level of the universe (Type IV). Even perhaps attain the knowledge to manipulate the universe at will (Type V). Now that would be an adventure for Humanity! Count me in [Kard].

A grand adventure for Humanity: explore and reshape the cosmos!

3.6 Cognitive Liberty

It’s one thing for the surveillance state or surveillance capitalism to monitor our electronic lives, as they do now. We’ve almost come to terms with it, whether we like it or not. But what about monitoring our thoughts? This will be a red line for most people, and rightly so. If our thoughts cannot be private, we risk freedom and personal sovereignty.

This is the concept of “cognitive liberty” or the “right to mental self-determination”: the freedom of an individual to control their own mental processes, cognition, and consciousness. I was happily surprised to discover a book-length exposé of the issue in “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology” by Nita A. Farahany.

A Web3 framing is: “Your keys, your thoughts. Not your keys, not your thoughts.” Web3 points to a potential starting point too: you hold the keys to your data, and use infrastructure like Arweave to store your brain data and Ocean Protocol to manage it. Appendix 5.1 elaborates. But this is only a partial solution; there will be many devilish challenges to work out. For example: if you hold the keys in your head, will the BCIs see those thoughts too? There are dozens or thousands of man-years’ worth of R&D needed here.
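As one hedged illustration of the “your keys” starting point: brain data could be encrypted client-side, with a key only the user holds, before it ever touches a storage network. The sketch below uses Python’s cryptography library; the upload function is a hypothetical placeholder, not a real Arweave or Ocean Protocol call.

```python
# Minimal sketch of "your keys, your thoughts": encrypt brain data locally,
# so whatever storage network holds it (Arweave, Filecoin, ...) only ever
# sees ciphertext. Requires: pip install cryptography
from cryptography.fernet import Fernet

# The user generates and holds this key; it never leaves their device.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

raw_brain_data = b"illustrative EEG sample bytes ..."  # placeholder payload
ciphertext = cipher.encrypt(raw_brain_data)

def upload_to_storage(blob: bytes) -> str:
    """Hypothetical stand-in for an Arweave/Filecoin client; not a real call."""
    return "fake-content-address"

address = upload_to_storage(ciphertext)

# Only the key holder can ever recover the plaintext.
assert cipher.decrypt(ciphertext) == raw_brain_data
```

Even then, as noted above, key management itself becomes the hard problem.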

It’s hard to overstate the importance of cognitive liberty. We need more work on this, asap. I’d love to see more funding for research efforts here, not to mention BCI-acceleration efforts in general.

In an age of BCI, how do we protect our thoughts and retain cognitive liberty?

4. Conclusion

ASI is coming, perhaps in 3–10 years. Humanity needs a competitive substrate, in time for this. BCI is the most pragmatic path. Therefore, we need to accelerate BCI and take it to the masses. That is bci/acc.

The “masses-first” bci/acc variant is to bring non-invasive BCI with killer apps like silent messaging to healthy people first; then to use market momentum to get over the invasive-BCI hump; and finally, to keep growing the power of each human’s bio-stack and silicon-stack brains. Looping this repeatedly, the net result is human superintelligence (HSI).

In perhaps 100 years (or 20) the majority of intelligences will be on a silicon or post-silicon substrate. Some will have human origin, some will have pure AI origin, and some will have a mix. They will all be general; they will all be sovereign; they will all be superintelligent: they are Sovereign General Intelligences (SGIs). They’ll be reshaping and exploring the cosmos, climbing the Kardashev scales.

bci/acc is solarpunk: optimistic and perhaps a little gonzo. It’s e/acc zoomed-in for BCI. And it could be a grand adventure for Humanity.

5. Appendices

5.1 Appendix: Sovereign Web3 Software

This section describes how many state-of-the-art Web3 systems are already sovereign — beholden to no one — and how Web3 capabilities will keep expanding for powerful sovereign agents.

Decentralized Elements of Computing. A blockchain is a network of machines (nodes) that maintains a list of transactions. It’s decentralized: no single entity owns or controls the network. With a decentralized list of transactions, we can then decentralize the elements of computing: storage, compute, and communications. “Web3” is a more accessible term than “blockchain”, but they basically mean the same thing.

Storage of Value. In blockchains, storage comes in two parts: storage of value, and storage of data. We already have a wonderful decentralized store of value, i.e. “digital gold”: Bitcoin, which stores BTC tokens. Released in 2009, it has a market cap > $700B and tens of millions of users. Just as Bitcoin has BTC as its native token, Ethereum has ETH, and so on; each is a store of value. Finally, there are tokens as decentralized scripts on top of a chain (e.g. ERC20).

Storage of Data. Smallish amounts of data can live on a chain itself; that’s how value is stored. We also have larger-scale decentralized data storage: Arweave and Filecoin are the leading projects. We have decentralized access control to that data via Ocean Protocol, and decentralized data feeds via Chainlink [LINK].

Compute (Processing). The first really great decentralized compute system was Ethereum, which came out in 2015. It runs smart contracts, which are simply small scripts running on decentralized compute hardware. It’s pretty expensive to do compute directly in Ethereum smart contracts, so there are ways that it’s scaling up. These include (a) more powerful “Layer 1” chains like Solana, (b) “Layer 2” chains, especially “Zero-Knowledge Rollup” L2s which enable compute to be off-chain with provable compute results stored on-chain, and (c) decentralized compute markets like iExec and Golem, and many more recent ones including AI-specialized ones.

Communications. Being decentralized networks, all blockchains have an element of communications built in. And there are multi-chain protocols like Cosmos or Polkadot, and cross-chain protocols like THORchain, CCIP, and Chainflip.
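To make the “storage of value” element concrete, here is a minimal sketch of reading on-chain state (an ETH balance) with the web3.py library. The RPC endpoint and address are placeholders you’d replace, and the method names follow web3.py v6.

```python
# Sketch: reading a decentralized store of value with web3.py.
# Assumes `pip install web3` and an Ethereum RPC endpoint you supply yourself;
# RPC_URL below is a placeholder, not a real endpoint.
from web3 import Web3

RPC_URL = "https://YOUR-ETHEREUM-RPC-ENDPOINT"
w3 = Web3(Web3.HTTPProvider(RPC_URL))

# Any public address works; the zero address here is illustrative only.
address = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")

balance_wei = w3.eth.get_balance(address)          # read on-chain state
print(Web3.from_wei(balance_wei, "ether"), "ETH")  # from_wei in web3.py v6
```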

Smart Contracts & Chains are Sovereign. Perhaps surprisingly, every smart contract running on a chain is sovereign 🤯 [Gov]. For example, Uniswap V2 is a popular decentralized exchange. Each Uniswap pool — say ETH/USDT — has its own smart contract. Each of those pools is a robot that just “does its thing”: holding liquidity, giving some USDT for people who bring it ETH, and giving some ETH for people who bring it USDT. There are no humans helping it, it “just runs”, you can’t turn it off, it’s not beholden to any specific individual, organization or jurisdiction. It is sovereign.
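To see how a pool “just does its thing” with no humans helping, here is the constant-product rule (x·y = k, with a 0.3% fee) that Uniswap V2 pools enforce, sketched in Python. The reserve numbers are made up; the real logic lives in the pool’s Solidity contract.

```python
# The math behind a Uniswap V2 pool: this mirrors V2's getAmountOut formula,
# where the 0.3% fee means only 99.7% of the input counts toward the swap.

def get_amount_out(amount_in: int, reserve_in: int, reserve_out: int) -> int:
    amount_in_with_fee = amount_in * 997
    numerator = amount_in_with_fee * reserve_out
    denominator = reserve_in * 1000 + amount_in_with_fee
    return numerator // denominator

# Illustrative pool: 1,000 ETH vs 3,000,000 USDT (made-up reserves).
eth_reserve = 1_000 * 10**18        # ETH uses 18 decimals
usdt_reserve = 3_000_000 * 10**6    # USDT uses 6 decimals

# Bring the pool 1 ETH; it gives back USDT, automatically.
usdt_out = get_amount_out(1 * 10**18, eth_reserve, usdt_reserve)
print(f"1 ETH -> {usdt_out / 10**6:,.2f} USDT")
```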

Each chain is sovereign too. Each chain is beholden to no one. It’s why Bitcoin can be framed as a life form.

These sovereign smart contracts and chains can do all the usual things: store & manage wealth, store & manage data, do compute, and communicate. Uniswap and Bitcoin answer to no one.

And, they have rights! While no one lobbied for these robots’ rights, and no law was created for these robots’ rights, they have rights nonetheless, because they can manipulate resources without asking. How? Because the technology itself allows for it: it’s a dry-code, not wet-code, approach to rights. It’s “your keys, your Bitcoin”, but for the robots themselves. Do you get it yet, anon?

AI & Agents for Web3. So far, chain-based robots haven’t been very smart. But this is changing as Web3 capabilities grow. Some examples:

Prediction is the essence of intelligence. Ocean Predictoor is a recent system for prediction feeds, powered by AI prediction bots and consumed by AI trading bots. The feeds themselves are sovereign; the bots can be too.

Fetch.ai and SingularityNET are Web3 systems to run decentralized agents (bots). These agents can be sovereign: no one owns or controls them.

The above AI * Web3 projects are by OGs in both AI & Web3. Advances in Web3 storage, processing, and communication have grown their capabilities. And the recent explosion in AI interest has brought a large new wave of AI * Web3 projects.

5.2 Appendix: AI Alignment Bingo

In 2022, Rob Bensinger tweeted the following text and image. It’s become a useful (and hilarious) reference in many AI & alignment crowds.

“Bad take” bingo cards are terrible, because they never actually say what’s wrong with any of the arguments they’re making fun of. So here’s the “bad AI alignment take bingo” meme that’s been going around… but with actual responses to the “bad takes”!

5.3 Appendix: Issues with the ASI-Risk Idea “dumber AIs aligning smarter ones”

This is the approach published by OpenAI in December 2023. The idea is to have a chain of AIs from dumb → smart, where each link is a dumber AI aligning the next-smarter AI.

Here, we elaborate on the issues.

“Chain risk” framing. Each link needs crazy-high reliability, which likely isn’t achievable. Assuming independent failures, the chain fails if any link fails, so: probability of chain failure = 1 − (1 − pfail₁) × (1 − pfail₂) × … × (1 − pfailₙ), further multiplied through by the success probabilities of the non-link components. E.g. if there are 5 links in the chain and perfect reliability for the non-link components, and you want <1% chance of overall failure, then each link must have roughly <0.2% chance of failure (see the numeric sketch below).

“Over-leverage risk” framing. This can be seen as over-leverage risk too. The 2008 financial crisis illustrates how over-leverage can go badly wrong. In 2008, there was a chain of derivatives on housing mortgages, like Credit Default Swaps, which amplified billions into tens of trillions: home mortgage → 10x derivative → 100x derivative → 1000x derivative. Any fluctuation in home mortgages, such as slight changes to interest rates, rippled to 1000x effects downstream.

Smartest entity might disagree. Rhetorically: could an ant align a bee, which aligns a mouse, which aligns a dog, which aligns a chimpanzee, which aligns a human? If you’re the human, would you let this happen?

Smartest entity could change rules of weaker layers. This is the smartest entity not just disagreeing, but actively pushing the other layers to its own benefit. In the 2008 financial crisis, to make more $, the bankers at the top (the final link) were highly incentivized to grow the $ volume of mortgages (the first link). This resulted in craziness. For example, a strawberry-picker husband & wife with < $15,000 combined annual income obtained a loan for a $720,000 house, with no money down. They had no hope of paying back the loan; the chain couldn’t last; the chain broke; the 2008 financial meltdown happened.
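A quick numeric check of the chain-risk math above, as a minimal sketch (function names are illustrative):

```python
# The chain fails if any link fails. For independent links,
# P(chain fails) = 1 - prod(1 - p_i).
import math

def chain_failure_probability(link_pfails: list[float]) -> float:
    return 1.0 - math.prod(1.0 - p for p in link_pfails)

def per_link_budget(n_links: int, max_chain_pfail: float) -> float:
    """Largest equal per-link failure probability that keeps the whole chain
    under max_chain_pfail: solve (1 - p)^n >= 1 - max_chain_pfail for p."""
    return 1.0 - (1.0 - max_chain_pfail) ** (1.0 / n_links)

budget = per_link_budget(n_links=5, max_chain_pfail=0.01)
print(f"each of 5 links must fail with p < {budget:.4%}")  # ~0.2008%
print(chain_failure_probability([budget] * 5))             # ~0.01
```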

To summarize, solving ASI risk with a chain of AIs carries great risk of its own.

Acknowledgements

Thank you very much to the following people for review, discussion, and feedback on these ideas and this essay in particular: Jim Rutt, Mark Meadows, Albert Wenger, Lou de Kerhuelvez, Jeff Wilser, Kalin Harvey, Bruce Pon, Joel Dietz, Shan Sundaramahalingam, and Jeremy Sim.

And, thank you to Mark Meadows for the opportunity to share the ideas with NASA, and Lou de Kerhuelvez & Allison Duettman for the opportunity to share with Foresight Institute. Finally, thanks to the e/acc movement for the courage and optimism. (And for the inspiration for the “bci/acc” label; it’s an improvement on “Bandwidth++”.)

Notes

[Spec1999] The fair was Spectrum 1999, held every four years at the University of Saskatchewan’s College of Engineering. Some people skied no better than random; others had 0% error. This was a useful lesson on the high variability among people in BCI accuracy. I found the same thing in experiments on other BCI devices too.

[Tsh2012] The researchers’ tricks included: more sensors; active rather than passive sensing (visual evoked potentials); maximizing the rate of neural firing; error-correction codes; and AI.

[Rutt2024a] Thanks to Jim Rutt for this specific framing. To expand on Jim’s words, lightly edited: “it’s not just 1000x more powerful but qualitatively different. The ASI could actually deeply understand in total detail even the most complex book. That’s very different from how humans create a rough highly compressed representation. Human memories are faulty and low fidelity; machines are not. Clock speed is 1 ms (1e-3 s) for neurons, and sub-nanoseconds (<1e-9 s) for chips. Today meat minds have a huge advantage in parallelization, but that will eventually be solved in silicon.”

[Hyp1989] Many sci-fi novels explore potential relations between humans and ASIs, where they act as gods to humans. The Hyperion Cantos and A Fire Upon the Deep are two prominent examples; there are more.

[Verd2023] “Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI”, Lex Fridman Podcast #407, Dec 29, 2023 https://www.youtube.com/watch?v=8fEEbKJoNbU

The e/acc movement has a second argument: the general idea of deceleration runs against the physics tendency of entropy to grow over time. On earth, this manifests as evolution in biology (the ability to acquire resources and reproduce), and as evolution in human organization (the ability to acquire resources and reproduce: capitalism). Even in the highly unlikely event that all the above deceleration efforts were somehow successful, in the medium term entropy and evolution would route around them anyway.

[Snow2013] Remember, Edward Snowden’s 2013 revelations didn’t stop the goals of PRISM surveillance. Now, the USA and its allies simply get the data via big tech companies rather than directly.

[Weng2023] From private conversation with Albert Wenger in late 2023, soon to be public.

[Goer2013] This is an oft-repeated example by Ben Goertzel, from 2013 and likely earlier.

[Rutt2024b] Thanks to Jim Rutt for this idea, and inspiration for the wording. (Private correspondence.)

[Rutt2024c] Thanks to Jim Rutt for helping develop this idea.

[BciTech] These include functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), transcranial stimulation like TMS (magnetic) and tFUS (focused ultrasound), and more. Each has its own strengths and weaknesses. bci/acc may use any of these. Endovascular BCI has a particularly promising tradeoff of minimally-invasive yet high-signal.

[Rutt2024d] A great question from Jim Rutt, lightly edited: “While the market is an excellent hill climber, there is no guarantee at all that it finds global maxima. Maybe there ought to be a political/social layer. Is this what humanity wants?”

[Kard] Once bci/acc unbounds humanity 100% from biological constraints, the “bci” part is less important. bci/acc generalizes back into e/acc. These Kardashev-scale goals are 100% in-line with the goals of e/acc.

Also: If we were bound by our bio stack, there would have been a hitch. The nearest star beyond our sun is Proxima Centauri. It takes 4.3 years to get there if you’re traveling at light speed. OK, doable. However, if you travel at Voyager speed (Voyager 1 being the fastest human-made object to leave the solar system so far), it will take roughly 73,000 years. That’s a time scale roughly 30x larger than the time back to the ancient Greeks. Less practically doable. As sci-fi author Charlie Stross has said, “Sending canned primates was never going to end happily”. Good thing we’ve unbound ourselves from our bio stack!
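A back-of-the-envelope check of those numbers, assuming Voyager 1’s roughly 17 km/s:

```python
# Rough check of the Proxima Centauri travel time quoted above.
LIGHT_YEAR_KM = 9.461e12       # kilometers in one light-year
VOYAGER_SPEED_KM_S = 17.0      # Voyager 1's speed, roughly, in km/s
SECONDS_PER_YEAR = 3.156e7

distance_km = 4.3 * LIGHT_YEAR_KM
travel_years = distance_km / VOYAGER_SPEED_KM_S / SECONDS_PER_YEAR
print(f"{travel_years:,.0f} years")  # on the order of 70,000+ years
```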

[LINK] Chainlink got its start doing decentralized data feeds. As it’s grown, its scope has expanded to much more.

[Gov] Assuming no governance, which is true for a lot of smart contracts. Uniswap V2 has no governance, though V3 does.

bci/acc: A Path to Balance AI Superintelligence was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


liminal (was OWI)

Catching the Age Wave: Stay Ahead of the Regulatory Surge Before Typical KYC Solutions Wash You Out

The post Catching the Age Wave: Stay Ahead of the Regulatory Surge Before Typical KYC Solutions Wash You Out appeared first on Liminal.co.

Tuesday, 08. October 2024

TBD on Dev.to

TBD x Hacktoberfest

With October blazing through, we're greeted by pumpkin spices, the aroma of fall leaves drifting in the rain, and of course, the much-anticipated Hacktoberfest. Whether you're a seasoned contributor or new to open source, there's something for everyone.

🎉 We're Participating in Hacktoberfest 2024!

We have several projects with a variety of issues that we'd love your contributions for! For each issue that's merged, you'll earn points towards the TBD Hacktoberfest Leaderboard. Winners will receive exclusive TBD Hacktoberfest 2024 swag!

We're kicking off Hacktoberfest with more events:

October 10: Twitter Space - Hacktoberfest Rust Projects
October 10: Exploring an AI-Powered GitHub Action

Be sure to add them to your calendar.

📌 What is Hacktoberfest?

Hacktoberfest is a month-long (October) celebration of open source software, sponsored by DigitalOcean, GitHub, and other partners. Check out Hacktoberfest's official site for more details and to register. Registration runs from September 23 to October 31.

📂 Dive into TBD's Participating Projects

We included a wide variety of projects and issues for Hacktoberfest 2024. Each of our participating repos has a Hacktoberfest Project Hub, which contains all issues you can pick up with the hacktoberfest label. For easy reference, repos with multiple projects will have multiple project hubs.

Explore our participating repos below and see where you can make an impact:

developer.tbd.website

Languages: MDX, JavaScript, CSS, Markdown
Description: Docusaurus instance powering the TBD Developer Website (this site).
Links: Hacktoberfest Project Hub | Contributing Guide

web5-js

Language: TypeScript
Description: The monorepo for the Web5 JS TypeScript implementation. It features libraries for building applications with decentralized identifiers (DIDs), verifiable credentials (VCs), and presentation exchange (PEX).
Links: Hacktoberfest Project Hub: Protocol Explorer | Hacktoberfest Project Hub: General | Contributing Guide

web5-rs

Language: Rust
Description: The monorepo housing the core components of the Web5 platform: core Rust code with Kotlin bindings. It features libraries for building applications with decentralized identifiers (DIDs), verifiable credentials (VCs), and presentation exchange (PEX).
Links: Hacktoberfest Project Hub | Contributing Guide

dwn-sdk-js

Language: TypeScript
Description: Decentralized Web Node (DWN) reference implementation.
Links: Hacktoberfest Project Hub | Contributing Guide

DWA Starter

Language: JavaScript
Description: Decentralized Web App (DWA) starter collection.
Links: Hacktoberfest Project Hub: VanillaJS | Hacktoberfest Project Hub: Vue | Contributing Guide

DIDPay

Language: Dart
Description: Mobile app that provides a way for individuals to interact with PFIs via tbDEX.
Links: Hacktoberfest Project Hub | Contributing Guide

DID DHT

Language: Go
Description: The did:dht method and server implementation.
Links: Hacktoberfest Project Hub | Contributing Guide

DCX

Languages: TypeScript, JavaScript
Description: A Web5 Protocol for Decentralized Credential Exchange.
Links: Hacktoberfest Project Hub | Contributing Guide

Goose Plugins

Language: Python
Description: Plugins for Goose, an AI developer agent that operates from your command line.
Links: Hacktoberfest Project Hub | Contributing Guide

Fllw, Aliased

Languages: TypeScript, JavaScript
Description: A reference app for building Decentralized Web Apps.
Links: Hacktoberfest Task: Fllw | Hacktoberfest Task: Aliased

Hot Tip
Not a coder? No worries! developer.tbd.website has tons of non-code related issues up for grabs.

📝 Guide to TBD x Hacktoberfest 2024

✅ Topic Check: Contribute to projects that have the hacktoberfest label. This ensures your PR counts towards the official Hacktoberfest prizes.

🏷️ Label Insights:

Start with an issue labeled hacktoberfest and comment ".take" to assign yourself the issue.
After submitting a PR and having it approved, the PR will be labeled hacktoberfest-accepted, and you'll receive points on our leaderboard and credit towards the global Hacktoberfest 🎉
If your PR is marked with a spam or invalid label, re-evaluate your contribution to make it count.

🥇 Code and Conduct: Adhere to our code of conduct and ensure your PR aligns with the repository's goals.

🫶 Community Support: Engage with fellow contributors on our Discord for success tips from other participants!

🆘 Seek Help: If in doubt, don't stress! Connect with the maintainers by commenting on the issue or chat with them directly in the #🎃┃hacktoberfest channel on Discord.

🎁 Leaderboard, Prizes and Excitement

Be among the top 10 with the most points to snag custom swag with this year's exclusive TBD x Hacktoberfest 2024 design! To earn your place on the leaderboard, we have created a points system, explained below. As your issues are merged, you will automatically be granted points.

💯 Point System

🐭 Small: 5 points. For smaller issues that take limited time to complete and/or don't require any product knowledge.
🐰 Medium: 10 points. For average issues that take additional time to complete and/or require some product knowledge.
🐂 Large: 15 points. For meaty issues that take a significant amount of time to complete and/or possibly require deep product knowledge.

🏆 Prizes

The top 10 contributors with the most points will be awarded TBD x Hacktoberfest 2024 swag from our TBD shop. The top 3 contributors in our top 10 will be awarded very limited customized TBD x Hacktoberfest 2024 swag with your GitHub username on it. Stay tuned to our Discord for the reveal!
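For illustration, here's a tiny sketch of how the tiers add up for one contributor (the merged-issue list is made up):

```python
# Points per issue size, per the table above; the issue list is hypothetical.
POINTS = {"small": 5, "medium": 10, "large": 15}

merged_issues = ["small", "small", "medium", "large"]
total = sum(POINTS[size] for size in merged_issues)
print(f"leaderboard points: {total}")  # 35
```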

Keep an eye on your progress via our Leaderboard.

🎙️ Livestreams & Office Hours

Dive into our jam-packed Hacktoberfest schedule! Whether you're just here for fun or are focused on learning everything you can, we've got you covered:

Every Tuesday, Community Office Hours - Join us every Tuesday at 1pm ET for the month of October, where we will go over PR reviews, live Q&A, and more. This event occurs on Discord.

Twitter Space: Hacktoberfest Rust Projects - Join Staff Developer Advocate @blackgirlbytes & Software Engineer @kendallweihe this Thursday at 12pm ET, where you can learn about our core Rust SDK with Kotlin bindings and contributions we're seeking for this project. This event will be live on our Twitter.

Exploring an AI-powered GitHub Action - Join Head of Engineering Michael Neale & Staff Developer Advocate @blackgirlbytes this Thursday at 5pm ET, to learn more about an AI-powered action made by Goose, an AI developer agent that operates from your command line.

Live Events Calendar - Keep tabs on our Discord or developer.tbd.website for our future events & sneak peeks - we're always cooking up something new!

📚 Resources for First-Time Contributors

📖 How to Contribute on GitHub
🛠 Git Cheatsheet
🔍 Projects Participating in Hacktoberfest

Happy hacking and cheers to Hacktoberfest 2024! 🎉


Spruce Systems

Meet the SpruceID Team: Jacob Healy

Jacob leverages his experience in managing complex software implementations to drive successful project execution, working with his team to transform ideas into impactful solutions for clients.
Name: Jacob Healy
Team: Product Delivery
Based in: Arvada, Colorado

About Jacob

My journey began with math and an interest in problems, though no problem in particular. I was fortunate to find a great company solving important problems early in my career, where I could dive into development and IT implementation for government agencies. That led to managing large-scale software implementations, where I landed on the particular problem of just how to get complex things done: process development and building strong teams.

I eventually decided to move on to SpruceID as the Product Delivery Lead because I think digital identity is a problem that needs to be solved, and SpruceID is able to do it and do it right, while affording me the opportunity to contribute significantly and quickly to that goal.

Can you tell us about your role at SpruceID?

At SpruceID, I tackle the gap between “we should do this” and “we did this.” As the one ultimately accountable for the success of project execution across the organization, I have the opportunity to wear many hats, the privilege of working closely with everyone in the organization, and the honor of collaborating directly with and delivering value to all of our clients.

What do you find most rewarding about your role?

The most rewarding aspect of my role is seeing the tangible impact of our work on government agencies and, ultimately, the public. Implementing innovative digital credentialing solutions that streamline processes and enhance security gives me a great sense of accomplishment. Additionally, building, leading, and collaborating with a high-performing team that consistently delivers successful outcomes is incredibly fulfilling.

What are some of the most important qualities for someone in your role to have, in your opinion?

Adaptability, a steady hand, and the ability to learn quickly and hold context. Seeing the forest for the trees is critical for overall success. It's important to navigate complex projects, engage effectively with diverse stakeholders, and make informed decisions that balance various priorities, often in real time. Being able to hold the line and understand both technical and business aspects is essential.

What are you currently learning, or what do you hope to learn?

It feels like everything, all the time. There is always something new in the digital identity space, but being in such a fast-moving startup keeps me on my toes in all ways. I learn something new every day from my colleagues and partners about scaling teams, processes, and tech.

What has been the most memorable moment for you at SpruceID so far?

There have been many, but one that stands out was kicking off our work with the State of Utah. It was really fun to go to Salt Lake City and meet with some great technologists, talk about verifiable digital credentials, and explore how the Department of Natural Resources could use them for off-highway vehicle permits. And then seeing it all launch later, of course.

What's the best piece of advice you’ve received since starting here?

The best piece of advice I've received since starting at SpruceID is to embrace a growth mindset. The special twist, or magnification, that a startup puts on this has given me a new point of view. I've learned that viewing challenges as opportunities to learn rather than obstacles encourages innovation and resilience. This perspective has empowered me to tackle complex projects with confidence and to continuously develop both personally and professionally.

What is some advice that you’d give to someone in your role who is early in their career?

Embrace continuous learning and remain adaptable. The technology landscape is always evolving, so staying curious will help you navigate changes effectively. Building strong relationships with your team and stakeholders is also key to successful project delivery. Maybe more than anything, though: pay attention. You never know when that random conversation or piece of information will be the exact thing you need. Good-enough and great outcomes are often separated by Slumdog Millionaire moments.

How do you define success in your role, and how do you measure it?

The easy answer here is to make sure we meet the objective, have the impact we intended, and take into account our strategic goals. I like to add some qualifiers, though: the ends do not always justify the means. How does the team feel? How does the client feel? Did we do right by the customer and ourselves? If yes, then success.

Fun Facts

What do you enjoy doing in your free time?: Raising my kids, trying to make the most of every minute.

What is your favorite coding language (and why?): My favorite coding language is whatever gets the job done when and how it needs to be done. I defer this to the wiser, more engineering-focused minds. 

If you could be any tree, what tree would you be and why?: If I could be any tree, I would be an Ent (from the world of Tolkien). Ents are wise guardians of the forest—embodying strength, resilience, and a deep sense of responsibility. They are not just passive observers; they take action to protect what they care about. Similarly, I strive to be as thoughtful of a leader as I can, who actively works to safeguard and nurture the people I work with and projectsI work on, pursuing growth and harmony.

Interested in joining our team? Check out our open roles and apply online!

Apply to Join Us

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.